Best practice when a mutation deletes an object #899

Closed
yurigenin opened this issue Nov 12, 2016 · 75 comments
Labels
📚 good-first-issue Issues that are suitable for first-time contributors. 📝 documentation

Comments

@yurigenin

What is the best practice for deleting items from the Apollo store when a mutation deletes them on the server? I could not find anything related to this in the docs. Do I just have to return the ids of deleted items as the mutation result and then use reducer/updateQueries to manually delete the items from the store? Or is there some way to let Apollo Client handle it automatically? Any help is appreciated.

@richard-uk1

richard-uk1 commented Nov 14, 2016

@helfer
Contributor

helfer commented Nov 14, 2016

@yurigenin Right now updateQueries or result reducers are the recommended way to remove items from the store. We're open to re-implementing better ways of doing this, so if you have an idea, please share it here!
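
For illustration, a rough sketch of the updateQueries approach (the query's operation name is assumed to be `todos`, and `DELETE_TODO_MUTATION` is a placeholder, not a real example from the docs):

import { graphql } from 'react-apollo';

const withDeleteTodo = graphql(DELETE_TODO_MUTATION, {
  props: ({ mutate }) => ({
    deleteTodo: id =>
      mutate({
        variables: { id },
        updateQueries: {
          // the key must match the operation name of the query being patched
          todos: prev => ({
            ...prev,
            todos: prev.todos.filter(todo => todo.id !== id),
          }),
        },
      }),
  }),
});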

@yurigenin
Author

@helfer @stubailo How do I actually purge the objects from the store? If I use reducers/updateQueries it only removes objects from the lists that contain them. But objects themselves stay in the store and become pretty much orphaned.

@richard-uk1

@yurigenin I think at the moment, you'd need to remove them manually, but I may just have missed that functionality when reading through the code

@helfer
Contributor

helfer commented Nov 16, 2016

@yurigenin right now there's no cache expiration. Deleting objects manually can be a bit dangerous, because you need to make sure that no other query is using them.

A few people have already asked for this, so here's a discussion around cache expiration: #825

@jfrolich

jfrolich commented Dec 6, 2016

It might help to have an example of the updateQueries best practice. If anyone here is able to share one, it would be great to have it in the documentation, because deletion is part of any CRUD app :).
How about the option of a function that removes an object based on its data id? That way I could just call that function after a delete to clear it from the local cache; it might even be called automatically if the mutation returns null or something.

@maslianok

Yeah, I also ran into this and have no idea how to correctly implement this behavior.

@jfrolich

jfrolich commented Dec 7, 2016

I think the easiest and most flexible way is to use the reducer function. At least I got it working quite easily using that. Still maybe a bit too much boilerplate for such a common action, but at least it solves it for me.

@TSMMark

TSMMark commented Dec 8, 2016

Can someone provide example code of how to use the reducer function to remove store data?

@HriBB

HriBB commented Dec 8, 2016

Already posted, but check this link http://stackoverflow.com/questions/40484900/how-do-i-handle-deletes-in-react-apollo?answertab=active#tab-top

I also use react-addons-update https://facebook.github.io/react/docs/update.html

npm install --save react-addons-update

Using the following query

const GET_BRANDS = gql`
  query brands {
    brand_categories {
      id title
      brands {
        id title description active pos
      }
    }
  }`

This is my updateQueries function

import update from 'react-addons-update'

export const DELETE_BRAND = gql`
  mutation deleteBrand($id: Int!) {
    deleteBrand(id: $id)
  }`

graphql(DELETE_BRAND, {
  props: ({ ownProps, mutate }) => ({
    deleteBrand: (id) => mutate({
      variables: { id },
      optimisticResponse: {
        deleteBrand: id,
      },
      updateQueries: {
        brands: (prev, { mutationResult }) => {
          // will find category and brand index from the previous list
          let categoryIndex, brandIndex
          prev.brand_categories.forEach((category, cindex) => {
            const bindex = category.brands.findIndex(b => b.id === id)
            if (bindex > -1) {
              categoryIndex = cindex
              brandIndex = bindex
            }
          })
          // then remove brand from the next list
          return update(prev, {
            brand_categories: {
              [categoryIndex]: {
                brands: { $splice: [[brandIndex, 1]] }
              }
            }
          })
        },
      },
    }),
  }),
})

Side question: is there an easier way to find indexes?

@yurigenin
Author

The problem is that this will not delete the brand itself from the store... It only updates the list.

@jfrolich

jfrolich commented Dec 9, 2016

Here's my version as a reducer. If you use proper ids, you get updates for free, but you need to handle the cases of adding to and removing from lists.

// assuming the same `update` helper used above
import update from 'react-addons-update'

const withContentItem = graphql(gql`
[YOUR QUERY]
`, {
  options: props => ({
    variables: { ... },
    reducer: (state, action) => {
      // insert
      if (
        action.type === 'APOLLO_MUTATION_RESULT' /* is it a mutation? */
        && action.operationName === 'mutateElement' /* mutate element? */
        && action.result.data.mutateElement.contentItemId === state.contentItem.id /* operate on right content id? */
        && !state.contentItem.elements.find((el) => el.id === action.result.data.mutateElement.id) /* not already in list */
      ) {
        return update(state, {
          contentItem: {
            elements: {
              $push: [action.result.data.mutateElement] /* add to end of list */
            }
          }
        })
      }
      // delete
      if (
        action.type === 'APOLLO_MUTATION_RESULT'
        && action.operationName === 'deleteElement'
        && action.result.data.deleteElement.contentItemId === state.contentItem.id /* operate on right content id? */
      ) {
        return update(state, {
          contentItem: {
            elements: {
              $splice: [[state.contentItem.elements.findIndex(el => el.id === action.result.data.deleteElement.id), 1]]
            }
          }
        })
      }
      return state
    }
  }),
})

@calebmer calebmer added 📝 documentation 📚 good-first-issue Issues that are suitable for first-time contributors. labels Jan 18, 2017
@nosovsh

nosovsh commented Mar 21, 2017

We are missing one big use case: when the frontend cannot determine which query results were changed, and how, after a deletion (the same applies to creating new entities). Let's say I queried users(age=29, limit=20, offset=10, sort='name').
Then I sent a mutation to delete some user. And now I cannot use reducer or updateQueries because I don't know HOW I should change the results of the users query. Only the server knows how to filter users based on the provided variables.
The only thing I can do is to somehow invalidate the cache for all users queries. But as far as I know there is no way to do that in Apollo. Am I right?
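
The closest built-in workaround is probably refetchQueries, which lets the server recompute the list, though it only covers the query/variables combinations you list explicitly (names below are placeholders), so it doesn't fully answer the "invalidate all users queries" part:

client.mutate({
  mutation: DELETE_USER_MUTATION,
  variables: { id },
  // refetch the filtered/paginated list from the server instead of
  // trying to recompute it on the client
  refetchQueries: [
    {
      query: USERS_QUERY,
      variables: { age: 29, limit: 20, offset: 10, sort: 'name' },
    },
  ],
});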

@jlovison

@nosovsh You can invalidate the entire cache, for everything, on a delete. That's what I ended up going with as a solution, though in my application deletes are a rarity anyways (i.e. perhaps once a week for a user at most).

Yes, it would be really nice if there was a way to invalidate a single item using the object ID the store is using to keep track of it, across all possible queries that would be including it. I tried to suss out exactly how the application is keying the cache for objects, and why all the solutions seem to couple it with the query (it must not be simple key/value, or these suggested workarounds would be total overkill), but after about an hour of digging into the code, I eventually just decided to go with the nuke option.

A clear explanation of the store and cache invalidation strategies would be quite welcome - as someone that isn't particularly familiar with Relay, I feel like most of the examples/discussion/documentation on the Apollo internal cache assumes a strong familiarity with Relay's store, and how it's plugged into Apollo.

For example, it wasn't clear at all to me whether zapping a cached item from one query also zaps it from other queries that would return it, or whether on deletion I need to essentially loop over every possible query that would return it, check if it's in the cache, and if so, zap it for each query. And if zapping it once zaps it everywhere, I'm again confused why I can't just zap it directly using the objectID generated to track it.
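
The "nuke option" boils down to roughly this (the mutation name is a placeholder):

client
  .mutate({ mutation: DELETE_ITEM_MUTATION, variables: { id } })
  // wipe the normalized cache and refetch all active queries
  .then(() => client.resetStore());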

@nosovsh

nosovsh commented Mar 27, 2017

@jamiter the problem with resetStore() is that after you call it, all active queries will be immediately refetched. And because Apollo recycles queries, a lot of them can still be active even if the components that were using them are already gone (see #462).
That can cause an enormous network load, so I cannot use it.

@jamiter
Contributor

jamiter commented Mar 27, 2017

@nosovsh, I think you mentioned me by accident.

@nosovsh

nosovsh commented Mar 27, 2017

oops, I meant @jlovison

@renato

renato commented May 11, 2017

(Please tell me if my question would be better asked in SO instead.)

I was using updateQueries to delete from the store, but it only works for the current parameters of a query, and I'm deleting data for other cached parameters (i.e. I filter by month and I'd like to delete data from multiple months). Does anyone know the recommended way to do this (without refetching)?

Plus, I thought migrating to the new update API could give me more flexibility on this matter, but I couldn't find any doc about deleting from the store when using update. What is the recommended way to do this? Does it support the case I described above?

Thanks!

@helfer
Contributor

helfer commented May 12, 2017

@renato with the update function you should be able to write null to the store. This won't actually remove any data, but it will break the links, so to the client it will look like the data isn't there any more and it will refetch.

What's the reason you want to delete from the cache and how important is that feature to you?
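
As a rough illustration of the write-null suggestion above (query and field names are placeholders, and this assumes the detail query is already cached):

mutate({
  variables: { id },
  update: proxy => {
    // overwrite the cached detail result with null so the reference is broken;
    // the next read will miss and trigger a refetch
    proxy.writeQuery({
      query: ITEM_QUERY,
      variables: { id },
      data: { item: null },
    });
  },
});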

@renato

renato commented May 15, 2017

@helfer I'm sorry, I'm probably missing something.

I don't really need to "delete from the cache" but this is the way that worked so far for me when deleting something from the view.

I have a list of items in my React view injected by Apollo. When I execute a mutation that deletes one item, how do I remove that specific item from my view? So far I've been deleting it from the cache using the updateQueries method.

However, the last query I'm working on has a month parameter, and I can navigate through months so they become cached by Apollo. I have a mutation that deletes some items and affects multiple months, not only the currently active parameter. The updateQueries method only affects the active parameter, so when I navigate to other months after this mutation, the old cache (from before the delete operation) is loaded.

If you don't delete from the cache in these situations, what is the solution? Just invalidate the cache or refetch?
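
One manual option, if the set of month values you may have cached can be enumerated, is to patch each variables combination inside update (names are placeholders; readQuery throws for combinations that were never fetched, hence the try/catch):

mutate({
  variables: { ids: deletedIds },
  update: proxy => {
    visitedMonths.forEach(month => {
      try {
        const data = proxy.readQuery({
          query: ITEMS_BY_MONTH_QUERY,
          variables: { month },
        });
        proxy.writeQuery({
          query: ITEMS_BY_MONTH_QUERY,
          variables: { month },
          data: {
            ...data,
            items: data.items.filter(item => !deletedIds.includes(item.id)),
          },
        });
      } catch (e) {
        // this month was never fetched, so there is nothing to patch
      }
    });
  },
});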

@dcworldwide

What's the reason you want to delete from the cache and how important is that feature to you?

To synchronise my view model (mobx) with my model store (apollo)

@chris-guidry

chris-guidry commented May 31, 2017

Based on the blog post Apollo Client’s new imperative store API, here's how I set up delete mutations:

this.props.FooDelete({
  variables: {
    input: {
      id: this.props.id
    }
  },
  update: (store, { data: { deleteFoo } }) => { 
    const data = store.readQuery({query: fooQuery() }); // the GraphQL query is returned by the function fooQuery
    const index = data.allFoos.edges.findIndex(edge => edge.node.id === this.props.id);
    if (index > -1) {
      data.allFoos.edges.splice(index, 1);
    }
    store.writeQuery({ query: fooQuery(), data });
  }
})
.then( res => {
  console.log('Foo Deleted.');
});

@derailed

I am confused. Most of the sample code I see on this thread seems to deal with running a delete mutation against a single query. What if I actually have multiple queries referencing that object? I thought the whole point of Apollo tracking refs (i.e. Type-X) was to handle such a use case.

For example, say a user deletes User-1. User-1 is referenced by multiple queries, say queryA and queryB. Based on what I see in this thread, it seems it would be the responsibility of the component's update to correctly update the store for each of the referencing queries??

I hope I am missing the point here, as this would be a maintenance nightmare! So getting back to the original question: what is the best way to handle this use case, i.e. delete the object and have Apollo remove the reference from ALL queries tracking that instance?

@chris-guidry

@derailed a few thoughts:

  1. The store is only going to save the user as one item in the cache so even if multiple queries are returning the user, it should only need to be removed once. If you haven't already, try out the Chrome Apollo extension, which will allow you to see how the store is updated post mutation.
  2. If there are items in the store related to the user such as BlogUser, ToDoUser, etc., those would most likely need to be explicitly removed from the store within the "update" section of the mutation.
  3. Generally, mutations are only going to impact a small number of tables, in which case the mutation update of the store makes sense. However, if a particular mutation is going to have wide-ranging effects, you could consider resetting the entire store with client.resetStore(). It's documented as part of the login/logout functionality, but it can be used elsewhere.

@derailed

derailed commented Jun 15, 2017 via email

@chris-guidry

@derailed - in the Apollo Chrome Extension, under the Store tab, the User is not removed after a delete mutation? If you post the code for the delete mutation, then I can give feedback on it.

@derailed

@chris-guidry - Tx Chris!

I believe the user is correctly removed from the query but not from the store. I think I am not understanding how to actually remove my user from the store. This code works for the users query. However, I have two other queries referencing that user, and the given user still shows in those.

So I think my confusion here is that I expect the delete to be propagated to the other queries since it's a delete mutation, but I don't think that's the way things actually work. But perhaps I am totally missing the point here??

Here is the code

   this.apollo.mutate({
      mutation: DeleteUser,
      variables: deleteInputs(this.user.id),
      // BOZO!! Lame!
      refetchQueries: [{ query: Facilities }, { query: Accounts }, { query: OrphanUsers }],
      optimisticResponse: optimisticDelete(this.user.id),
      update: (store: any, data: any) => {
        const cache = store.readQuery({ query: PvdmUsers });
        store.writeQuery({
          query: PvdmUsers,
          data: {
            pvdmUsers: _.filter(cache.pvdmUsers, (u: UserFragment): boolean => { return u.id !== this.user.id })
          }
        });
      }
    })
      .toPromise()
      .then(() => this.back())
      .catch(error => this.error = error.message);

@MichaelDeBoey
Contributor

MichaelDeBoey commented Jun 15, 2017

@derailed You're right that the User is still in the store, but isn't given back when asking for the PvdmUsers query.
Currently there's no way of really deleting an object from the store I think (hence why this thread is still alive).

@ravenscar

ravenscar commented Oct 20, 2017

Certainly performing a mutation and then expecting that to be able to correctly update all active queries is ...ambitious.

I'm not sure how many cases it works for, but I think using soft deletes might be a solution for some people.

Imagine a user:

{
   id: 123,
   name: "John Doe",
   email: "john@doe.com",
}

and maybe a message

{
  id: 456,
  text: "a witty comment",
  author: {
    id: 123
    ...
   }
}

Then John Doe wants to delete his account, so the mutation returns a result like:

{
   id: 123,
   name: "[deleted]",
   email: null,
   deleted: true
}

Then it's up to components displaying users to decide if they want to check if deleted is set or not. For instance the posts in a forum may remain with the user name set to [deleted], but sending a private message would not be allowed.
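
A minimal sketch of a component honouring the flag (React, names are placeholders):

const Author = ({ user }) => (
  <span>
    {user.deleted ? '[deleted]' : user.name}
    {/* hide actions that make no sense for a deleted account */}
    {!user.deleted && <SendMessageButton userId={user.id} />}
  </span>
);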

This problem is not limited to deletes, imagine an insert which affects the results of many queries as well.

If you are using React and find yourself in a situation where you are performing a mutation that you know may affect a number of active queries on many components, then you may decide to move the network-level fetching out of the HOC wrapping your components to somewhere else, and give your HOC a fetchPolicy of 'cache-only'.

In this case you need to perform the fetching logic somewhere else, good candidates for this are redux-observable or redux-saga.
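
A minimal sketch of that split (MESSAGES_QUERY and MessageList are placeholders): whatever owns the network fetching writes results into the cache, while the component only ever reads what is already there.

const withCachedMessages = graphql(MESSAGES_QUERY, {
  // never hit the network from the component; a saga/epic fills the cache
  options: { fetchPolicy: 'cache-only' },
});

export default withCachedMessages(MessageList);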

@jonmanzo
Contributor

jonmanzo commented Nov 8, 2017

This is how I handled the use case of removing an object from multiple cached queries and unshifting it onto multiple other cached queries.

the mutation...

import {
  mutationErrorHandler,
  mutationSuccessHandler,
  unshiftItemToQuery,
  removeFromRouteBasedStatusQuery
} from "../../util/mutationHelpers";

const self = "estimateItem";
const selfStatus = "estimated";

const wasSuccessful = mutationResult => {
  const status = mutationResult.data[self].status;
  return status === selfStatus;
};

export const ESTIMATE_ITEM_MUTATION = graphql(ESTIMATE_ITEM_GQL, {
  props: ({ mutate, ownProps }) => ({
    estimateItem: variables => {
      return mutate({
        variables,
        updateQueries: {
          Item: (prev, { mutationResult }) => {
            if (prev.item.id && prev.item.id !== mutationResult.data[self].id)
              return prev;
            return update(prev, {
              item: {
                status: { $set: mutationResult.data[self].status },
                estimated_at: { $set: mutationResult.data[self].estimated_at },
                estimated: { $set: mutationResult.data[self].estimated },
                last_updated: { $set: mutationResult.data[self].last_updated }
              }
            });
          },
          EmployeeItems: (prev, data) =>
            removeFromRouteBasedStatusQuery(
              prev,
              data,
              "EmployeeItems",
              wasSuccessful,
              variables,
              ownProps
            ),
          QueueItems: (prev, data) =>
            removeFromRouteBasedStatusQuery(
              prev,
              data,
              "QueueItems",
              wasSuccessful,
              variables,
              ownProps
            ),
          AllItems: (prev, data) =>
            removeFromRouteBasedStatusQuery(
              prev,
              data,
              "AllItems",
              wasSuccessful,
              variables,
              ownProps
            )
        },
        update: (proxy, mutationResult) => {
          // Employee Items
          unshiftItemToQuery(proxy, {
            mutationResult,
            query: EMPLOYEE_ITEMS_QUERY,
            variables: rootQueryVariables("myItems", {
              employee_id: variables.employee_id || ownProps.employee_id
            }),
            ownProps,
            which: "myItems",
            self
          });

          // All Items
          unshiftItemToQuery(proxy, {
            mutationResult,
            query: ALL_ITEMS_QUERY,
            variables: rootQueryVariables("allItems"),
            ownProps,
            which: "allItems",
            self
          });
        }
      }).then(
        mutationResult =>
          mutationSuccessHandler(
            mutationResult,
            wasSuccessful,
            "Item Estimated",
            ownProps
          ),
        mutationErrorHandler
      );
    }
  })
});

Used in Update Queries...

export const removeItemFromItemsList = (
  previousResult,
  mutationResult,
  objId
) => {
  if (!previousResult) {
    console.log("no previous query result");
    return;
  }
  const index = _.findIndex(previousResult.items.data, v => v.id === objId);
  if (index < 0) {
    console.log("item not found in previous query result");
    return;
  }

  return update(previousResult, {
    items: {
      data: { $splice: [[index, 1]] }
    }
  });
};

export const removeFromRouteBasedStatusQuery = (
  prev,
  data,
  opName,
  wasSuccessful,
  variables,
  ownProps
) => {
  const { mutationResult, queryName } = data;
  if (
    !ownProps.isRouteBasedQuery ||
    ownProps.routeBasedQueryType !== "status" ||
    !wasSuccessful(mutationResult) ||
    queryName !== opName
  ) {
    return prev;
  }
  return removeItemFromItemsList(prev, mutationResult, variables.id);
};

Used in update

export const unshiftItemToQuery = (
  proxy,
  { mutationResult, query, variables = {}, ownProps, which, self }
) => {
  try {
    /**
     * Do not unshift on the currently viewed query, it will get updated, in place by default Apollo Behavior
     */
    if (ownProps.currentRoute === which) {
      return;
    }

    /**
     * Stop writes to EmployeeQuery if it is not this users item
     */
    if (which === "myItems") {
      if (skipMyItemsUpdate({ variables, ownProps })) return;

      // Add this to account for mutation updates that do not provide employee_variables
      if (!variables.employee_id)
        variables.employee_id = ownProps.employee_id;
    }

    const result = mutationResult.data[self];

    if (!result) {
      console.log(`Could not get data.${self} result from %o`, mutationResult);
      return;
    }

    const data = proxy.readQuery({ query, variables });

    if (!data) {
      console.log(
        `Could not find ${which} query to update for ${self} mutation`,
        query,
        variables
      );
      return;
    }

    // If this item exists previously, splice it out
    const existingIndex = _.findIndex(data.items.data, { id: result.id });

    let existing = {};

    if (existingIndex > -1) {
      existing = data.items.data[existingIndex];
      data.items.data.splice(existingIndex, 1);
    }

    data.items.data.unshift({ ...existing, ...result });

    proxy.writeQuery({ query, variables, data });
  } catch (e) {
    console.error(
      `Could not add %o to ${which} query from ${self} mutation`,
      mutationResult
    );
    console.trace(e);
  }
};

I have a removeItemFromQuery helper which is nearly identical to unshiftItemToQuery. I created the helpers and use the "self" const etc. because we have (right now) 20+ mutations this can run on, and duplicating this in every mutation was a logistical nightmare. The removals that currently happen in the updateQueries call can (and will) be moved into update and handled the same way... If you're doing infinite scroll or any type of query pagination, connections are a must!

@oliverbenns

oliverbenns commented Dec 4, 2017

On our backend, we decided to give each record an active state.

We then added middleware on the backend that nullifies all of a record's values when it is inactive (alternatively, you could hard delete the values in the db).

Finally, on our frontend, we filtered by the active status, or disabled a view based on that status.

We didn't want to manually handle the Apollo state in our app, so this was our workaround solution.
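
A minimal sketch of what that frontend filter could look like (field and component names are placeholders, not from the actual app):

const ActiveRecordList = ({ data }) => {
  // inactive ("deleted") records stay in the cache but never reach the UI
  const records = (data.records || []).filter(record => record.active);
  return <RecordList records={records} />;
};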

@miksansegundo

I finally found a solution for my case with multiple queries. There is a race condition when using refetchQueries and updateQuery with a mutation. It seems the updateQuery call that removes the object from the Apollo store needs to happen before refetchQueries.

In my case I have many different queries sorting/filtering objects:

  • latestSongs
  • featuredSongs
  • mostViewedSongs
  • mostRemixedSongs
  • ...and more, both general and per-user

The user can open the detail view of an object from any list and remove that object using a mutation. When the object is deleted from the database, I need to update the detail view to redirect out, and also the query results for each list that could contain that object.

I don't want to remove the object manually from each list with update or updateQueries, or reset the whole store, so my option is to use refetchQueries to get the new lists and updateQuery to remove the object reference from the Apollo store:

  • refetchQueries alone doesn't work: the queries are refetched and the lists returned don't have the removed object, but Apollo still grabs the data from the store and shows the removed object in the lists
  • updateQuery alone doesn't work: the Song:id entry is updated, but Apollo doesn't understand that the Song:id reference is also in the other queries (the lists)

If I wait for the server response to remove the object reference from the store, the queries are refetched but the UI still shows the removed object in the lists.
Updating the object reference after the mutation promise is resolved doesn't work:

    removeSongMutation({id})
       .then(() => updateQuery(() => ({ song: null })))

Updating the object reference before refetchQueries works:

    updateQuery(() => ({ song: null }))
    removeSongMutation({id})

The UI is updated correctly with the data from the refetched lists (not from the store). This is a kind of optimistic response because I'm updating the reference in the store before the object is removed from the database.

"apollo-client": "^2.0.4",
"react-apollo": "^2.0.4",

Hope this helps

@maierson

We tried a different way by making use of a query's fetchPolicy.

Goals

  • reduce complexity of refetchQueries by refetching only the currently displayed query(s). In other words only what is currently visible on the page. Queries do not re-fetch at mutation time unless their data is visible to the user. A "dirty" query however refetches from the network on next invocation.
  • reduce the complexity of tracking all queries that need to be re-fetched. Each query should know at fetch time if it needs to re-fetch from the server or it can just hit the cache (although a mutation still needs to know the name of each query that it will impact).

Brief explanation

We have a booksQuery that feeds an infinite scroll of Books. At the same time it can be filtered by the user with various arguments. So invalidating and re-fetching ALL the query / filter combinations can get really expensive.

We looked at a query's fetchPolicy and switching it dynamically between cache-first and network-only. This way we can force a query to refetch from the server if it is dirty.

To achieve this we need a queryCache in which to track each query by name, along with the input filters it has already fetched. Every time a query runs, it gets its fetchPolicy from the queryCache. (We're using TypeScript, but it works just the same in ES6 without the typings.)

The sequence is as follows:

  • mutation: invalidate all related queries (by name) on the queryCache
  • query: next time the query runs it gets its fetchPolicy: "network-only" from the queryCache
  • on query completion: clean the queryCache for the query to prevent it from going to the network on every subsequent invocation. This causes the queryCache to return fetchPolicy:"cache-first" on subsequent calls.

Example

Connected component query:

const BookFeedContainer = graphql(
   BookQueries.bookFeed,
   {
      options: (props: IOwnProps) => {
         const bookFilter: InputBookFilter = props.bookFilter;
         
         // BookQueries is a namespace under which we host all 
         // BookQuery graphql related info (fragments, queries, mutations, types etc)
         const queryName = BookQueries.names.bookFeed;
         const inputKey = InputBookFilter.toKey(bookFilter);
         
         // every time the query runs its fetch policy is determined by the queryCache
         const fetchPolicy = queryCache.getFetchPolicy(queryName, inputKey);

         return ({
           variables: {
             bookFilter: props.bookFilter,
           },
           fetchPolicy,
         });
      },
   }
)(BookFeed);

Somewhere else in the app a mutation occurs:

handleClickDeleteBook = () => {
    this.props.deleteBook()
      .then((result: any) => {
        // invalidate (refetch) all related queries if the deletion succeeds
        // this causes the queryCache to provide a "network-only" fetchPolicy
        // the next time the bookFeed query runs
        // NOTE that this does NOT cause the query to be invoked right away
        // if you need the query update to run immediately you must still call updateQueries
        // we do that on queries that are visible on the page
        queryCache.refetch([BookQueries.names.bookFeed]);

        // ... rest of code
      })
      .catch(err => {
        console.error(getGraphQLErrorMessage(err));
      });
  }

Query cleanup - most likely in componentWillReceiveProps(nextProps) once the query has finished executing

queryCache.clean(BookQueries.names.bookFeed);

Query Filtering

Each query has a list of input ids (inputKey in our case) keyed in on the query cache by query name:

queryCache = {
  [queryName:string]: Array<inputKey:string>
}

If an input filter's key is in the query's cached list then the query / inputFilter combination is "clean" and can be fetched from the cache (with fallback to network = default apollo config). If the input filter's key is NOT in the list or the list is missing from the cache then the query / filter combination is dirty and it must be fetched from the network. fetchPolicy: network-only.

This design allows to mark all query / filter combinations as dirty (either on a delete or add op) by just deleting the query's list of input keys (note that I do not need to know all the filter keys that the user might have accessed):

import queryCache from "query-cache"; // this is a singleton

const queryName1 = "BookQueries_bookFeed"
const queryName2 = "SomeOtherRelatedQuery_name"

// this invalidates each queryName's list on the cache thus forcing 
// ALL combinations of each query to be refetched from the network on next call
queryCache.refetch([queryName1, queryName2]); 

Then when executing the query I can retrieve the fetch policy directly from the queryCache:

const fetchPolicy1 = queryCache.getFetchPolicy(queryName1, inputKey); // cache-first or network-only

// or without filtering
const fetchPolicy2 = queryCache.getFetchPolicy(queryName2); 

Detailed code

queryCache.ts:

/**
 * Hash map to keep track of all queries executed so far. Each query
 * can have multiple input filters. When the query is invalidated (isDirty)
 * then the cache policy defaults to "network-only". Else it's "cache-first".
 * Each time an input is fetched its key is placed into the inputKeys list
 * for the specific query. Remove its key from the list to force it to be
 * loaded from the network the next time
 */
class QueryCache {
  private cache = {};

  /**
   * Each query has a list keyed in by the query name. The list contains
   * the keys for all the inputs that have been previously fetched
   * and are "clean" (they can be safely retrieved from the cache). If an
   * input's key is not in the list then it must be refetched from the network.
   * If the list is missing from the cache then any variation of the query
   * (or a query without input params) must be refetched from the network.
   *
   * @param {string} queryName - The graphql name of the query being targeted.
   */
  private getQueryList = (queryName: string) => this.cache[queryName];

  /**
   * Checks whether a specific input key for a given query should be
   * re-fetched. Only inputs that are already in the inputKeys are "clean".
   *
   * @param {string} queryName - graphql name of the query being targeted.
   * @param {string | undefined} inputKey - key uniquely identifying
   * the query's input parameters.
   */
  private isDirty = (queryName: string, inputKey?: string) => {
    const queryList = this.getQueryList(queryName);
    // if no query list on the cache then query is dirty
    // re-fetch from the network
    if (!queryList) {
      return true;
    }
    return inputKey ? this.getQueryList(queryName).indexOf(inputKey) < 0 : false;
  };

  /**
   * Invalidates the query's entire input list. All versions of the query regardless
   * of input will be re-fetched from the network.
   *
   * @param {Array<string>} queryNames - list of names for all queries that should
   * be re-fetched from the network on the next call.
   */
  refetch = (queryNames: Array<string> = []) => {
    queryNames.forEach((name) => { delete this.cache[name] });
  }

  /**
   * Releases this query / filter(s) combination from fetch network-only constraints.
   *
   * @param {string} queryName - graphql name of the query being released
   * from network-only constraints.
   * @param {string | undefined } inputKey - key uniquely identifying
   * the query's input parameters.
   */
  clean = (queryName: string, inputKey?: string) => {
    const queryList = this.getQueryList(queryName) || [];
    queryList.push(inputKey);
    if (!this.cache[queryName]) {
      this.cache[queryName] = queryList;
    }
  }

  /**
   * Provide the appropriate fetch policy according to the cache's status.
   *
   * @param {string} queryName - graphql name of the query for which the
   * fetch policy is being retrieved.
   * @param {string | undefined } inputKey - key uniquely identifying
   * the query's input parameters.
   */
  getFetchPolicy = (queryName: string, inputKey?: string) => (
    this.isDirty(queryName, inputKey) ? "network-only" : "cache-first"
  )
}

const queryCache = new QueryCache();
export default queryCache; // singleton

bookQueries.ts (simplified - in reality we use typescript namespaces):

export const BookQueries = {
    bookFeed: gql`
      query BookQueries_bookFeed($bookFilter: InputBookFilter!, $cursor:String){
        bookFeed(bookFilter:$bookFilter, cursor:$cursor)
        @connection(key: "bookFeed", filter:["bookFilter"]) {
          list {
            id
          }
          cursor
          hasNext
        }
      }
    `,
    names: {
      bookFeed: "BookQueries_bookFeed",
    }
  };

InputBookFilter.ts (also simplified):

export default class InputBookFilter {
   private privacy: string

   constructor(privacy:string = "private"){
     this.privacy = privacy;
   }

   /**
    * All this has to do is provide a unique key given the input parameters.
    */
   static toKey = (filter: InputBookFilter) => `inputBookFilter_${filter.privacy}`;
}

We haven't yet explored all the edge cases but on the first immediate try this works really well for us.

@coodoo

coodoo commented Feb 23, 2018

@maierson thanks for sharing your approach, it sounds very intriguing. Just wondering: have you discovered any drawbacks or hard-to-handle edge cases since?

@maierson

@coodoo so far it works really well for us. If I run into anything meaningful I'll make sure to update this thread.

The next challenge I see is how to handle a delete / creation in a list where a user has already downloaded a very long list of (paged) items. Do we re-fetch the entire list or just part of it based on some form of cursor data?

For deletes: this is where I feel it would be useful to be able to evict an item from the apollo cache and have all queries containing it update locally to reflect the removal. Having some experience with writing a client side cache I am aware of the complexities and risks of inconsistency that this can introduce in the data. However it's a very high value add if implemented properly.

@canercandan
Contributor

canercandan commented Mar 2, 2018

Prisma supports the transactional mutations concept, basically cascading deletes via the directive @relation(…, onDelete: CASCADE).

Maybe Apollo could pick up that logic and apply those rules to the cache, so that whenever a node is deleted from the cache (using options.update()), its relations are deleted as well, based on those transactional rules.

Obviously this would only work with Prisma in the first place, but it could open a chapter on supporting other backend frameworks and databases (in the end Prisma uses MySQL, and cascading deletes are familiar to most DB engines).

@jedwards1211
Contributor

jedwards1211 commented Jul 17, 2018

The store is only going to save the user as one item in the cache so even if multiple queries are returning the user, it should only need to be removed once.

This is actually not true. If there are multiple queries, with different parameters, that return a list of items, each one will have a different list of result item IDs in the cache, because the client has no way of knowing which items are included for a given set of parameters. This is why updates to fields within those items magically work but deletes don't, because the id of the deleted item has to be removed from each of those lists.

But if Apollo merely provided a way to mark a given typename/id pair as invalidated, it could automatically refetch any queries including that item in their result set.

I have started using helper functions to look through apolloClient.queryManager.queries, checking if a given field is present in the GraphQL selectionSet and, if so, refetching the query. But since these are only the active queries, I have to use cache-and-network to ensure that any unmounted components refetch the latest data when they remount.

Next I plan to fetch enough type metadata from the backend to be able to scan all levels of the active queries for the deleted item and refetch any that contain it.
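
The per-document part of that check can be done with graphql-js's visit; iterating the active queries relies on apollo-client internals (queryManager) that differ between versions, so only the document scan is sketched here:

import { visit } from 'graphql';

// does this query document select a field with the given name anywhere?
function queryMentionsField(document, fieldName) {
  let found = false;
  visit(document, {
    Field(node) {
      if (node.name.value === fieldName) found = true;
    },
  });
  return found;
}

// usage against an ObservableQuery you already have a handle on:
// if (queryMentionsField(observableQuery.options.query, 'users')) observableQuery.refetch();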

@jedwards1211
Contributor

jedwards1211 commented Jul 18, 2018

I just made a library to magically refetch all relevant active queries after creates and deletes! It still has some limitations I need to work through, and I need to add some optimizations, but check it out:
https://github.com/jcoreio/apollo-magic-refetch

@jedwards1211
Contributor

I just realized that updating the cache after associating/dissociating objects can also be well served by apollo-magic-refetch, so I added some features and examples to support that case as well.

@ravenscar

@jedwards1211 this is a neat idea, but it would be very chatty and data intensive if there are a lot of updates. We have a chat app that has conversations and receives messages via subscriptions; if it were to reload all the messages of a conversation every time a new message was added, it would use a lot of data for mobile users, and it wouldn't be that responsive, as it adds another network call when the data is already in the store.

We first solved this using redux-observable and keeping a watchQuery subscribed for each active conversation then updating the watchQuery when a relevant message came in via subscription.

A new approach we are trying is to use our own cache in place of 'apollo-cache-inmemory', one which emits a stream of events when new data is written, or updated, to the normalised cache. We then have cacheWatchers for queries which might care about the data, and they can update the cache via readQuery() and writeQuery().

The goal is to have data changes automatically update relevant queries that are saved in the cache, without network fetching.

If anyone is interested in this I can share the code but it's at an early stage.

@jedwards1211
Contributor

jedwards1211 commented Jul 19, 2018

@ravenscar Well I'm not saying you should use it for any and every use case, and obviously it doesn't make sense for that case. If it doesn't seem useful for anything else in your app, then your app probably just has different needs than mine.

It sounds like you would agree with me that custom update logic belongs on the Query components more often than on the Mutation components, right?

I'm a bit confused how the custom cache approach works in your new message example. When a new message arrives via subscription it gets written to the cache, and then a listener picks that up?
And if a message were deleted, you would delete the message:id cache key and then listeners would pick that up and remove a reference to that key from their lists of results?

@jedwards1211
Contributor

jedwards1211 commented Jul 19, 2018

If I were to go full on in the opposite direction from refetching, I would probably just design a system to listen to all the relevant events for a given query on my database and send events to the client specific to each query, telling it what to add/remove/change in its result set, rather than trying to write a lot of custom update logic on the client. My data is very relational, so the number of update cases I would have to handle for each query would break my brain.

@jedwards1211
Contributor

What I really wish I could do is ask the InMemoryCache for all cached items of a given __typename, and then update them. If that were the case, when I delete an Item, I could just go through all ItemConnections in my cache and remove the edge for that Item.
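
apollo-cache-inmemory's extract() gets part of the way there: it returns the normalized map, so entries of a given __typename can at least be found (removing the connection edges that point at them is still manual). A sketch, assuming the default dataIdFromObject:

// normalized is keyed by ids like "Item:1", each entry carrying its __typename
const normalized = client.cache.extract();

const itemKeys = Object.keys(normalized).filter(
  key => normalized[key] && normalized[key].__typename === 'Item'
);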

@jedwards1211
Contributor

jedwards1211 commented Jul 19, 2018

@ravenscar I should also clarify what I'm using the magic refetching for right now in my app: whenever a user creates or deletes an organization, another user, a user group, a device, a device group, etc. or changes associations between them.

These are infrequent operations, so the extra refetching is a small price to pay for a strong guarantee of correct updates without handling numerous cases in custom update logic for each of these types in my schema. Additionally, only one page of results shows onscreen at a time for lists of any of these types, so the amount of data to refetch is well bounded (in general it's a good idea to avoid having an unbounded amount of data rendered into the DOM anyway). In these cases the refetch approach really shines.

In frequent update cases like messages, you're still better off doing something custom.

@ravenscar

@jedwards1211 Hey, I'm not saying what you've done isn't useful - sorry if it came across that way! We did try this initially (refetching active queries), but for us it was too data heavy.

It sounds like you would agree with me that custom update logic belongs on the Query components more often than on the Mutation components, right?

I do agree that the logic belongs with the Query and not the Mutation - the update logic in the mutation is ass-backwards in my opinion.

It would be nice if apollo queries had something like reducers which worked on the updates that hit the store, but unfortunately that would probably require some kind of normalisation and I don't think this is a prerequisite for the store (even though that's exactly how it works).

When a new message arrives via subscription it gets written to the cache, and then a listener picks that up?

We have a way of detecting changes in the cache and it's exposed as an rxjs observable. This can be subscribed to by many observers which often manage the state of queries, and they add or remove the changed entity from collections.

If you use redux this is somewhat similar to a reducer where the state would be the result of the query and the actions would be the entity changes.

We use a CQRS paradigm not CRUD so it's a bit easier to detect changes, but I don't think that really matters too much.
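
A guess at the shape of such a cache (not the actual implementation): wrap the cache's write method and push each write through an rxjs Subject that the per-query watchers subscribe to.

import { Subject } from 'rxjs';
import { InMemoryCache } from 'apollo-cache-inmemory';

// every write to the normalized cache is mirrored onto this stream
export const cacheWrites$ = new Subject();

export class ObservableInMemoryCache extends InMemoryCache {
  write(write) {
    super.write(write);
    cacheWrites$.next(write); // { dataId, result, query, variables, ... }
  }
}

// a watcher might do:
// cacheWrites$.subscribe(event => { /* readQuery + writeQuery to patch an affected list */ });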

What I really wish I could do is ask the InMemoryCache for all cached items of a given __typename, and then update them.

It would be really nice if readFragment supported this but I suppose it's near impossible unless you have a normalised cache. If you are using apollo-cache-inmemory, which uses a normalised cache, you could write your own version of it and add that functionality quite easily.

Basically roll your own cache like this and track __typenames

When you create the InMemoryCache instance supply a storeFactory in the config which produces your cache.

@jedwards1211
Contributor

I get that you're detecting changes to the cache, but what initial change do you make when an object gets deleted? Do you set something to null?

Also if you have plans of making this a general purpose lib it might be better to use zen-observable since Apollo is using it, though that's just my opinion.

@ravenscar

If you want to delete it from the store you can call delete() with the object id. Of course you would have to have logic to remove it from all the collections as well - which is a pain.

By far the easiest approach for us has been to have a field on the object called "deleted"; then when you delete an object, just send it through with deleted: true and everything will update. You need to write your client-side container components with this in mind though, so it's definitely no silver bullet.

This all falls into the cache-invalidation side of the two hard things in computer science, and it seems really difficult (perhaps impossible?) to find a generalised solution. Certainly I haven't seen any solution people are actually happy with, with respect to Apollo's cache.

@YCMitch

YCMitch commented Jul 26, 2018

Sorry to chime in so late - I've just run across the same issues as the rest of you.

Now, just to confirm for my own sanity - when deleting an object, you need to find every single reference of it in the client and remove them all manually?

Is that not kind of a huge step back from Redux + Normalizr? At least in that case all your entities are in the same place. Feels a bit like jQuery 😄

@jedwards1211
Contributor

jedwards1211 commented Jul 26, 2018

Does Normalizr provide a way to automatically delete all references to an object without violating the types you've defined?
I mean, naively one could think that Apollo could just remove references from all lists. But if you're using Relay Connections, the reference to an object is not inside a simple list, and an ancestor rather than the reference itself needs to be removed.
Likewise, if you have an object type Foo that has one required Bar, and its Bar gets deleted, what do you do? You can't set its Bar reference to null; that would violate type expectations. You could delete Foo, but then you've kinda got the same problem as before if anything references Foo...

@jedwards1211
Contributor

@MitchEff In normalizr gaearon recommends using a flag to indicate deleted: https://github.com/paularmstrong/normalizr/issues/21
People have also been recommending that here for Apollo.
How did you personally handle deletions in normalizr?

@YCMitch

YCMitch commented Jul 27, 2018

I did mine a kind of funny way, but it takes advantage of the fact that every entity is just a key in your state. In my 'deleteEntity' action, I pass in type, entity and parentType. I can then go:

let parent = clonedEntityState[parentType + 's'][entity[parentType + '_id']];

remove(parent[type+'s'], entityID => {
    return entityID === entity.id;
});

It's a bit simplified above, but you see what I mean. It's not perfect, but it does 99% of the work in just a couple of lines. I don't think you can achieve something quite as neat in GraphQL - you'd need to find every query that mentions the entity's parent and remove it manually, right?

@hwillson
Member

To help provide a more clear separation between feature requests / discussions and bugs, and to help clean up the feature request / discussion backlog, Apollo Client feature requests / discussions are now being managed under the https://github.com/apollographql/apollo-feature-requests repository.

This issue has been migrated to: apollographql/apollo-feature-requests#5

@apollographql apollographql locked and limited conversation to collaborators Jul 27, 2018