
Jank due to DeepCollectionEquality inside ferry_cache #394

Closed
fabiancrx opened this issue Aug 30, 2022 · 10 comments · Fixed by #411
Labels
enhancement New feature or request

Comments

@fabiancrx

While debugging, I found that the majority of the CPU time was spent in

(prev, next) => const DeepCollectionEquality().equals(

It seems to be related to zino-hofmann/graphql-flutter#1196. One of the potential workarounds there is to bypass the comparison altogether by setting alwaysRebroadcast: true on the client (zino-hofmann/graphql-flutter#1196 (comment)), since the comparison takes so much time.

  1. Does the ferry client offer a similar capability to always broadcast changes and skip the comparison?
  2. Is there a way to bypass the cache, if not for all operations then at least for specific ones, avoiding both the cache and comparison overheads?
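For context on why this call can dominate CPU time: DeepCollectionEquality walks the entire response structure on every broadcast. A minimal hand-rolled equivalent (deepEquals below is illustrative, not the package:collection implementation) makes the cost visible:

```dart
// Hand-rolled deep equality, for illustration only. ferry_cache uses
// DeepCollectionEquality from package:collection, which behaves similarly:
// every comparison walks the whole structure, so the cost grows with the
// size of the cached response, and it runs on every cache broadcast.
bool deepEquals(Object? a, Object? b) {
  if (identical(a, b)) return true;
  if (a is Map && b is Map) {
    if (a.length != b.length) return false;
    for (final key in a.keys) {
      if (!b.containsKey(key) || !deepEquals(a[key], b[key])) return false;
    }
    return true;
  }
  if (a is List && b is List) {
    if (a.length != b.length) return false;
    for (var i = 0; i < a.length; i++) {
      if (!deepEquals(a[i], b[i])) return false;
    }
    return true;
  }
  return a == b;
}

void main() {
  // A large cached response: the comparison must visit every element.
  final prev = {
    'items': List.generate(100000, (i) => {'id': i, 'name': 'item $i'}),
  };
  final next = {
    'items': List.generate(100000, (i) => {'id': i, 'name': 'item $i'}),
  };
  final sw = Stopwatch()..start();
  final equal = deepEquals(prev, next);
  sw.stop();
  print('equal=$equal in ${sw.elapsedMilliseconds}ms');
}
```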
@knaeckeKami
Collaborator

knaeckeKami commented Aug 31, 2022

Hi!

Yes, this is an issue that I ran into myself.
No, at the moment ferry does not offer anything in this regard, unfortunately.

But it's on my list of things to tackle.

I want to look into potential performance improvements to make this less harmful in general, add escape hatches like alwaysRebroadcast, and maybe, in the long term, move the whole client to an isolate to avoid jank from heavy calculations.

The timeframe for this is weeks though, so in the short term you will probably have to find a workaround.
You can set the FetchPolicy to .NoCache for specific requests, though (it's a parameter of the G...Req object builder).
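The effect of a no-cache policy can be modeled in a few lines: it skips both the cache write and the subsequent comparison/rebroadcast. MiniClient and its fields below are invented purely for illustration; in ferry the policy is set on the generated G...Req builder.

```dart
// Toy model of a fetch-policy switch. This is NOT the ferry API, just a
// sketch of what a no-cache policy bypasses.
enum FetchPolicy { cacheFirst, noCache }

class MiniClient {
  final Map<String, String> cache = {};
  int networkCalls = 0;
  int cacheHits = 0;

  String execute(String query, FetchPolicy policy) {
    if (policy == FetchPolicy.cacheFirst && cache.containsKey(query)) {
      cacheHits++; // cached path: this is where comparisons would happen
      return cache[query]!;
    }
    networkCalls++;
    final result = 'result-of-$query';
    if (policy != FetchPolicy.noCache) {
      // noCache skips the write, so there is nothing to compare or rebroadcast
      cache[query] = result;
    }
    return result;
  }
}

void main() {
  final client = MiniClient();
  client.execute('q', FetchPolicy.cacheFirst); // network, then cached
  client.execute('q', FetchPolicy.cacheFirst); // served from cache
  client.execute('q', FetchPolicy.noCache); // bypasses cache entirely
  print('network=${client.networkCalls} cacheHits=${client.cacheHits}');
  // network=2 cacheHits=1
}
```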

@knaeckeKami knaeckeKami added the enhancement New feature or request label Aug 31, 2022
@fabiancrx
Author

fabiancrx commented Aug 31, 2022

Hi @knaeckeKami, I had totally missed FetchPolicy.NoCache. Although in testing it improved the jank a little, it did not entirely get rid of the time spent in DeepCollectionEquality. I experimented with ferry_cache and found that tweaking fragmentDataChangeStream and operationDataChangeStream to drop the equality comparison did solve my jank issue.
The changed code went from

Stream<Set<String>> operationDataChangeStream<TData, TVars>(
...
    return stream
        .distinct(
          (prev, next) => const DeepCollectionEquality().equals(
            prev,
            next,
          ),
        )
        .doOnData((_) => changed.add(dataId));

To:

    return stream.doOnData((_) => changed.add(dataId));

As of now I have a fork of the ferry_cache package.
Are these changes too naive, or would it be helpful if I made a PR adding something like the alwaysRebroadcast: true option I mentioned earlier?
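The trade-off in dropping .distinct() can be sketched with plain dart:async streams (a string comparison stands in for DeepCollectionEquality here):

```dart
import 'dart:async';

// Sketch of the trade-off: .distinct() with an equality callback suppresses
// duplicate broadcasts at CPU cost, while dropping it rebroadcasts every
// write, including writes that did not change the data.
Future<void> main() async {
  final writes = ['a', 'a', 'b', 'b', 'b', 'c']; // cache writes, some unchanged

  final filtered = await Stream.fromIterable(writes)
      .distinct((prev, next) => prev == next) // analogue of the equality filter
      .length;
  final unfiltered = await Stream.fromIterable(writes).length;

  print('with distinct: $filtered updates'); // with distinct: 3 updates
  print('without distinct: $unfiltered updates'); // without distinct: 6 updates
}
```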

@knaeckeKami
Collaborator

You can use your fork for now. I will do some more research into whether it's safe to remove the .distinct() call without impacting other features like optimistic patches.

@knaeckeKami
Collaborator

knaeckeKami commented Sep 3, 2022

I am working on a workaround for this issue on feature/cache-deduplication-strategy.

It's very much experimental right now and if this gets merged into stable, the naming will likely change.

The changes are in the packages ferry_exec, ferry_cache, and ferry_generator (you need to rerun code gen).

You can control the behaviour by setting the cacheDeduplicationStrategy on your Requests or setting the defaultDeduplicationStrategy on the cache.

.none is equivalent to what you did: don't filter any updates. This causes a lot of unnecessary updates, especially when using optimistic updates.
.onNormalizedObjects is the default behaviour as implemented now: watch every single data id, compare the data on the identifiable objects, and build a new response when any content changes.
.afterDenormalize filters stream updates after building the response data.

If you want, you can try it out :). I would be curious whether .afterDenormalize also fixes your jank issues.
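The three strategies can be modeled roughly as follows. shouldRebroadcast and its parameters are invented for illustration; the real implementation lives on the feature branch and its naming may change.

```dart
// Toy model of the three deduplication strategies described above.
enum CacheDeduplicationStrategy { none, onNormalizedObjects, afterDenormalize }

/// Decide whether an update should be rebroadcast to watchers.
/// `normalizedChanged` = did any watched data id's content change;
/// `responseChanged`   = did the fully built response data change.
bool shouldRebroadcast(
  CacheDeduplicationStrategy strategy, {
  required bool normalizedChanged,
  required bool responseChanged,
}) {
  switch (strategy) {
    case CacheDeduplicationStrategy.none:
      return true; // never filter: cheap per write, many redundant updates
    case CacheDeduplicationStrategy.onNormalizedObjects:
      return normalizedChanged; // compare per identifiable object
    case CacheDeduplicationStrategy.afterDenormalize:
      return responseChanged; // compare only the final response data
  }
}

void main() {
  // A write that touched an object the query doesn't actually render:
  final spurious = shouldRebroadcast(
    CacheDeduplicationStrategy.afterDenormalize,
    normalizedChanged: true,
    responseChanged: false,
  );
  print(spurious); // false: afterDenormalize filters it out
}
```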

@fabiancrx
Author

Ok, thanks for the quick action! I'll give it a try and get back to you with feedback on the performance.

@knaeckeKami
Collaborator

Cool. I just added some additional changes (debouncing, so every writeQuery/writeFragment causes only one update when using CacheDeduplicationStrategy.none, instead of potentially one update for each identifiable object).
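The debouncing idea can be sketched in a few lines. DebouncedNotifier below is a hypothetical stand-in, not the actual ferry_cache implementation:

```dart
import 'dart:async';

// Sketch of debouncing: many synchronous cache writes collapse into a
// single update, instead of one update per identifiable object.
class DebouncedNotifier {
  final void Function() onUpdate;
  Timer? _pending;

  DebouncedNotifier(this.onUpdate);

  void notify() {
    _pending?.cancel(); // coalesce bursts of writes into one pending callback
    _pending = Timer(Duration.zero, onUpdate);
  }
}

Future<void> main() async {
  var updates = 0;
  final notifier = DebouncedNotifier(() => updates++);

  // One writeQuery touching many normalized objects:
  for (var i = 0; i < 50; i++) {
    notifier.notify();
  }
  await Future<void>.delayed(const Duration(milliseconds: 10));
  print('updates: $updates'); // updates: 1
}
```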

@fabiancrx
Author

Hi @knaeckeKami, oddly enough, when I override the dependencies for ferry_exec, ferry_cache, and ferry_generator to use the branch you mentioned, the jank and stutter disappear. That's without even setting the cacheDeduplicationStrategy, and with every one of its possible values tested in the Cache(). I've pasted a fragment of the pubspec.lock with my initial dependencies, in which I had the jank issue, in case it helps. For now I intend to use the branch you created, as it works correctly without any performance or functionality degradation.

Fragment of pubspec.lock
  ferry:
    dependency: transitive
    description:
      name: ferry
      url: "https://pub.dartlang.org"
    source: hosted
    version: "0.10.5"
  ferry_cache:
    dependency: transitive
    description:
      name: ferry_cache
      url: "https://pub.dartlang.org"
    source: hosted
    version: "0.6.1+1"
  ferry_exec:
    dependency: "direct overridden"
    description:
      name: ferry_exec
      url: "https://pub.dartlang.org"
    source: hosted
    version: "0.2.0-dev.0"
  ferry_flutter:
    dependency: "direct main"
    description:
      name: ferry_flutter
      url: "https://pub.dartlang.org"
    source: hosted
    version: "0.6.0-dev.0"
  ferry_generator:
    dependency: "direct dev"
    description:
      name: ferry_generator
      url: "https://pub.dartlang.org"
    source: hosted
    version: "0.6.0"
  ferry_hive_store:
    dependency: "direct main"
    description:
      name: ferry_hive_store
      url: "https://pub.dartlang.org"
    source: hosted
    version: "0.4.4"
  ferry_store:
    dependency: transitive
    description:
      name: ferry_store
      url: "https://pub.dartlang.org"
    source: hosted
    version: "0.4.5"

@knaeckeKami
Collaborator

Good to hear!
Yes, with the general improvements there might not even be a need to introduce the cacheDeduplicationStrategy parameter.

Just so you know: there is an issue on unreleased versions, including the cacheDeduplicationStrategy branch, that might cause a watched stream not to update correctly in some edge cases. It will soon be fixed by #403.

@fabiancrx
Author

As far as I know, #403 was just merged, and the initial jank I reported is nonexistent on the latest branches of the project. I think this issue can be closed as soon as a new version is published to pub.

@knaeckeKami
Collaborator

Yes, I just merged it, and I will work on getting this over the finish line next.
