fix (performance): #3606 Add cache for pocket stories and topics #3654

Merged
merged 3 commits into mozilla:master from rlr:gh3606/pocket-cache on Oct 13, 2017

@rlr
Member

rlr commented Oct 5, 2017

This uses a file on disk as a persistent cache.

Fixes #3606

@rlr rlr requested a review from Mardak Oct 5, 2017

Member

Mardak commented Oct 5, 2017

Some initial testing. Here's the number of frames (30fps) from first paint to 1) search box, 2) strings, 3) topics/stories (+relative times for 2 & 3 from 1):

before: 10, 14, 18/18 ( +4,  +8/ +8)
before: 14, 21, 25/25 ( +7, +11/+11)
before: 13, 17, 20/23 ( +4,  +7/+10)
 after: 12, 17, 17/17 ( +5,  +5/ +5)
 after: 14, 19, 19/19 ( +5,  +5/ +5)
 after: 13, 23, 23/23 (+10, +10/+10)

If we just take the median times, it looks like:

before: 13, 17, 21/23 ( +4,  +8/+10)
 after: 13, 18, 18/18 ( +5,  +5/ +5)

So at least on my machine, with this fix, topics and stories show up at the same time as strings, with caching being ~100ms/167ms faster than the network. It's unclear whether the slightly slower strings with this caching are just noise or related.

@Mardak

Overall things look good from a timing/performance perspective. We should fix up the Prefs usage and wait on @csadilek for final review of the behavior changes.


@Mardak Mardak requested a review from csadilek Oct 5, 2017

@Mardak Mardak assigned rlr and csadilek and unassigned Mardak Oct 5, 2017

Member

Mardak commented Oct 5, 2017

Oh. One thing about using prefs is that there is a max size. I just checked the cache and it's ~13KB. @sarracini do you remember where things go bad? @csadilek any idea how big we should expect the response to get?

Member

Mardak commented Oct 5, 2017

Actually, on second thought… a 13KB string pref might be bad for other Firefox performance that depends on prefs. On a new profile with this caching, the prefs file is 22KB total, so we're more than half of it…

@k88hudson any suggestions on some lightweight caching? I suppose the usual thing is to write JSON to a file in the profile / cache directory…?

Member

Mardak commented Oct 5, 2017

Tiles used this to write to the Local/Cache profile directory:
https://searchfox.org/mozilla-central/source/browser/modules/DirectoryLinksProvider.jsm#330-338

Could even read out the file timestamp for lastUpdated.
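
For reference, a minimal sketch of that approach with OS.File (the helper names and the activity-stream.stories.json path are illustrative, not from this PR):

const {utils: Cu} = Components;
Cu.import("resource://gre/modules/osfile.jsm"); // OS.File, OS.Path, OS.Constants

const CACHE_PATH = OS.Path.join(OS.Constants.Path.localProfileDir, "activity-stream.stories.json");

async function writeCache(data) {
  // writeAtomic with a tmpPath avoids leaving a half-written file on crash.
  await OS.File.writeAtomic(CACHE_PATH, JSON.stringify(data), {encoding: "utf-8", tmpPath: `${CACHE_PATH}.tmp`});
}

async function readCache() {
  // The file's mtime can stand in for lastUpdated, as suggested above.
  const [text, {lastModificationDate}] = await Promise.all([
    OS.File.read(CACHE_PATH, {encoding: "utf-8"}),
    OS.File.stat(CACHE_PATH)
  ]);
  return {data: JSON.parse(text), lastUpdated: lastModificationDate.getTime()};
}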

Collaborator

csadilek commented Oct 5, 2017

@Mardak response size will be 15KB or more (without images). We're currently fetching 20 stories, but that could increase over time, e.g., if personalization is successful and we want a bigger client-side selection.

@csadilek

Thanks, looks good! Had two comments inline about not using this.stories, and cache expiry (we should verify with design if showing old stories is OK).

Member

k88hudson commented Oct 5, 2017

Your options are probably either writing to IndexedDB or writing to a JSON file in the profile directory. I think the JSON file approach will probably be easier, and it's not like a lot of frequent reads/writes will be happening anyway 👍

Member

k88hudson commented Oct 5, 2017

Also, you probably want to store a fixed version number (even just 1 is fine) or something that would tell you not to use the cached data if you land breaking changes to the data format.

We should also have some kind of max age or expiration timing on start-up, to prevent very old stories from being shown.
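
A sketch of that validity check at load time (CACHE_VERSION and MAX_AGE_MS are hypothetical names):

const CACHE_VERSION = 1;
const MAX_AGE_MS = 24 * 60 * 60 * 1000; // e.g., one day

function isCacheUsable(cached, now = Date.now()) {
  // Reject the payload on a format change or if it's too stale to show.
  return Boolean(cached) &&
    cached.version === CACHE_VERSION &&
    typeof cached.lastUpdated === "number" &&
    now - cached.lastUpdated < MAX_AGE_MS;
}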

Member

Mardak commented Oct 5, 2017

Alternatively, punt on versioning and other metadata for now; when we do have a new version, anything that lacks a version field is version 1! ;)

Collaborator

csadilek commented Oct 6, 2017

@rlr @Mardak @k88hudson Another edge case here is that when a story is dismissed, the cache needs to be invalidated; otherwise a dismissed story could show up again after a restart. My biggest concern is stale content, though. Maybe we should talk about this before landing?

Member

Mardak commented Oct 6, 2017

A dismissed story is blocked, so wouldn't it just need to filter/transform the items before dispatching?

Collaborator

csadilek commented Oct 6, 2017

Yes, we could filter after reading from cache. Moving the filter logic out of transform would make it reusable.
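
For illustration, assuming the NewTabUtils.blockedLinks API that the existing filter logic consults (filterBlocked is a hypothetical helper name):

const {utils: Cu} = Components;
Cu.import("resource://gre/modules/NewTabUtils.jsm");

function filterBlocked(stories) {
  // Drop anything the user dismissed so it can't resurface from the cache.
  return stories.filter(story => !NewTabUtils.blockedLinks.isBlocked({url: story.url}));
}

// Usable both on fresh network responses (inside transform) and on cache reads.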

Member

rlr commented Oct 6, 2017

@Mardak That last commit ^ changes the "cache" to be a file. I still need to work on tests but maybe you can run the little perf test on it to see how it compares? Or I'm happy to do that if you show me how 😄

Member

Mardak commented Oct 7, 2017

I use ScreenFlow to record my screen and just note the frame count (30fps by default) for various things appearing. Here's the number of frames, as in the earlier comment:

before: 11, 16, 23/23 (+5, +12/+12)
before: 10, 16, 23/23 (+6, +13/+13)
before: 11, 18, 23/23 (+7, +12/+12)
after:  13, 19, 24/24 (+6, +11/+11)
after:  14, 22, 23/23 (+8,  +9/ +9)
after:  14, 19, 25/25 (+5, +11/+11)

I'm on a different network from earlier, but the initial few runs seem to show reading from separate files as slower than reading from prefs… I'll try measuring again later tonight.

Member

Mardak commented Oct 7, 2017

orig: 11, 16, 19/19
orig: 10, 16, 20/20
orig: 12, 19, 22/22
orig: 10, 15, 19/19
orig: 11, 17, 20/20
file: 12, 17, 21/21
file: 11, 16, 20/21
file: 10, 16, 19/19
file: 11, 17, 21/21
file: 12, 19, 21/21
pref: 11, 18, 22/22
pref: 14, 21, 24/21
pref: 13, 18, 22/22
pref: 12, 17, 21/20
pref: 12, 17, 22/22

Well now, hrmm… median frames to stories/topics:
orig: 20/20
file: 21/21
pref: 22/22
…?
Edit: Never mind the pref ones. I checked out the getState/SetPref commit, but it wasn't actually setting the pref because the value needs to be stringified before writing.

Member

Mardak commented Oct 7, 2017

Testing with actual pref caching (see previous comment edit) and a slow network:

file: 12, 19, 21/22
file: 13, 17, 21/22
file: 12, 20, 21/21
pref: 13, 22, 22/22
pref: 10, 19, 19/19
pref: 11, 19, 19/19
orig: 12, 16, 57/23 (topics showed up over a second earlier than stories)
orig: 14, 20, 50/51
orig: 14, 22, 59/59

So yes, for those on a slow network, caching definitely helps, whether as a file or as a pref. I suppose one optimization is to store stories and topics together in a single file.

Member

rlr commented Oct 9, 2017

^ That last commit makes it one file. That makes things more complicated, because you have to read before you write unless we keep around copies of the latest stories and topics. As it is now, there is also a race condition: it ends up calling saveToFile({topics}) in parallel with saveToFile({stories}), and one of the saves can basically get lost. So... if it isn't much faster, I don't think it's worth it. But if it is, we can fix the bugs.

Member

Mardak commented Oct 9, 2017

The concurrent saves should be relatively simple to fix, either:

  • keep an in-memory cache (seems a little bit dangerous if someone attempts to directly touch those values), or
  • await any pending saves to serialize saving (although we need to be careful of multiple pending saves thinking they're safe to go when the 1st one resolves; e.g., with 3 concurrent saves, the 1st finishes and then the 2nd and 3rd both run concurrently instead of serially)
Member

Mardak commented Oct 9, 2017

Ha ha ha… here's the "simple" await mutex:

slow = () => new Promise(resolve => setTimeout(resolve, 1000));
mutex = null;
file = "file: ";
save = async v => {
  console.log("saving", v, file);
  // Wait until any in-flight save finishes; re-check in a loop because
  // another waiter may grab the mutex before we do.
  while (mutex) {
    await mutex;
  }
  console.log("grabbing mutex", v);
  mutex = new Promise(async resolve => {
    console.log("grabbed mutex", v);
    // Simulate a slow read-modify-write of the file.
    let data = file;
    await slow();
    data += v;
    file = data;
    console.log("saved", v, file);
    // Clear the mutex before resolving so woken waiters see it free.
    mutex = null;
    console.log("released mutex", v);
    resolve();
  });
};
save(1); save(2); save(3);

Should print:

saving 1 file:
grabbing mutex 1
grabbed mutex 1
saving 2 file:
saving 3 file:
saved 1 file: 1
released mutex 1
grabbing mutex 2
grabbed mutex 2
saved 2 file: 12
released mutex 2
grabbing mutex 3
grabbed mutex 3
saved 3 file: 123
released mutex 3
Member

Mardak commented Oct 9, 2017

Here's a mutex/lock wrapper that keeps the mutex logic separate from the save logic:

slow = () => new Promise(resolve => setTimeout(resolve, 1000));
mutex = null;
file = "file: ";
save = async v => {
  console.log("saving", v, file);
  let data = file;
  await slow();
  data += v;
  file = data;
  console.log("saved", v, file);
};
locked = async cb => {
  // Wait for any in-flight callback, then take the lock.
  while (mutex) { await mutex; }
  mutex = new Promise(async resolve => {
    await cb();
    // Clear the lock before resolving so woken waiters see it free.
    mutex = null;
    resolve();
  });
};
locked(() => save(1)); locked(() => save(2)); locked(() => save(3));
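
For comparison, a sketch that serializes the saves without a lock loop, by chaining each save onto a single promise queue (not from this PR):

let queue = Promise.resolve();
// Each callback starts only after the previous one has fully finished.
const serialized = cb => (queue = queue.then(cb).catch(e => console.error(e)));
serialized(() => save(1)); serialized(() => save(2)); serialized(() => save(3));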

@rlr rlr added the PR / Needs work label Oct 9, 2017

Collaborator

csadilek commented Oct 10, 2017

@rlr ok perfect, thanks! @Mardak Thanks for confirming with Nate. Recalculating the card type (now vs trending) can probably wait, as it's deactivated right now? We're waiting for #3402.

That would leave filtering for dismissed URLs as the last step. If we're going to do both, though, it might be easier to cache Pocket's response directly and call transform on it after it's loaded from cache, rather than caching the transformed result and transforming again after load… hm… maybe just filter for blocked and deal with the rest later?

Member

Mardak commented Oct 10, 2017

The cache should probably hold the raw data from the server, and we should transform it after reading from cache. The "now vs trending" value is still being calculated; it's just that the UI only ever shows Trending.
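
That direction might look roughly like this in TopStoriesFeed (a sketch; loadFromCache and dispatchUpdate are hypothetical names, while transform stands for the feed's existing method):

// Sketch only: a method of the feed, with hypothetical names.
async loadFromCache() {
  const raw = await this.cache.get("stories"); // raw Pocket response, as cached
  if (raw) {
    // Run the cached copy through the same pipeline as a fresh network
    // response, so card types are recomputed and dismissed URLs filtered.
    this.dispatchUpdate({rows: this.transform(raw)});
  }
}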

Member

rlr commented Oct 11, 2017

On my machine, at home, most of the time the network results load before the disk cache does 😬 I handle that properly, though (I think).

The kind of cool thing, though, is turning off wifi, starting the browser, and still having top stories instead of empty boxes.

Member

rlr commented Oct 12, 2017

@Mardak I removed the mutex/locking because I don't think it's necessary anymore now that we keep an in memory copy and aren't reading before writing.

The coverage check isn't happy because the functions below aren't getting executed. Any ideas on how to fix or skip that check?

XPCOMUtils.defineLazyGetter(this, "gTextDecoder", () => new TextDecoder());
XPCOMUtils.defineLazyGetter(this, "gInMemoryCache", () => new Map());
XPCOMUtils.defineLazyGetter(this, "gFilesLoaded", () => []);

Any other thoughts? r?

Member

Mardak commented Oct 12, 2017

You can probably get line coverage by updating unit-entry.js to just call the lazy part of the defineLazyGetter. Although I avoided this for FilterAdult.jsm in #3422 by making them not lazy. The thinking there is that if the module itself is loaded lazily, only when we actually need to start using it, then additionally making items within the module lazy is just overhead.
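
For comparison, the non-lazy route from #3422 would just be plain top-level values (a sketch):

// Plain top-level values instead of lazy getters; the module itself is
// already loaded lazily, so these only run when the module is first used.
const gTextDecoder = new TextDecoder();
const gInMemoryCache = new Map();
const gFilesLoaded = [];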

Member

Mardak commented Oct 12, 2017

Not needing the mutex sounds right, as the save operation doesn't really allow for concurrent mixing.

We should probably rename the .jsm to match what it exports, so I guess PersistentCache.jsm.

Member

rlr commented Oct 12, 2017

ok I think this is good (for now) for r?

I'm going to see if I can get something similar working with IndexedDB.

@Mardak

Mardak requested changes Oct 12, 2017 edited

Sorry, I guess this is basically rewriting PersistentCache, but the two globals gInMemoryCache and gFilesLoaded don't seem quite right given the discrepancy between what's in memory vs on disk. And there doesn't seem to be a need to actually have shared global state?

I'm currently thinking of something like…

class {
  constructor(name, {preload}) {
    …;
    if (preload) {
      this._load();
    }
  }
  _load() {
    return this._cache || (this._cache = new Promise(async resolve => {
      let data = {};
      …; // the load from file stuff
      resolve(data);
    }));
  }
  async get(key) {
    const data = await this._load();
    return key ? data[key] : data;
  }
  async set(key, value) {
    const data = await this._load();
    data[key] = value;
    this._persist(data);
  }
}

Where _load returns the same Promise each time, resolving to whatever object its first invocation initialized or read from disk.

I think it'll be quite a bit cleaner this way, but feel free to push back ;)
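
Filling in the elided pieces, a complete sketch of that class might look like the following (the OS.File usage and file location are assumptions for illustration, not necessarily what the PR landed):

const {utils: Cu} = Components;
Cu.import("resource://gre/modules/osfile.jsm");

class PersistentCache {
  constructor(name, preload = false) {
    this._filename = `${name}.json`;
    if (preload) {
      this._load();
    }
  }

  _load() {
    // The Promise is created on the first call and reused afterwards, so the
    // file is read at most once and every caller shares the same object.
    return this._cache || (this._cache = new Promise(async resolve => {
      let data = {};
      const path = OS.Path.join(OS.Constants.Path.localProfileDir, this._filename);
      try {
        data = JSON.parse(await OS.File.read(path, {encoding: "utf-8"}));
      } catch (e) {
        // Missing or corrupt cache file: fall back to an empty object.
      }
      resolve(data);
    }));
  }

  async get(key) {
    const data = await this._load();
    return key ? data[key] : data;
  }

  async set(key, value) {
    const data = await this._load();
    data[key] = value;
    this._persist(data);
  }

  _persist(data) {
    const path = OS.Path.join(OS.Constants.Path.localProfileDir, this._filename);
    return OS.File.writeAtomic(path, JSON.stringify(data), {encoding: "utf-8", tmpPath: `${path}.tmp`});
  }
}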

Member

rlr commented Oct 13, 2017

@Mardak alrighty. ^ that came out nice I think.

@Mardak

A few questions, in particular about removing the Promise from _persist. Otherwise this should be good!

   * @param {boolean} preload (optional). Whether the cache should be preloaded from file. Defaults to false.
   */
  constructor(name, preload = false) {
    this.name = name;

@Mardak

Mardak Oct 13, 2017

Member

Let's just compute the file name once:
this._filename = `${name}.json`;

Member

Mardak commented Oct 13, 2017

Oh actually, I wonder if we should be putting our cache files into an activity stream directory… or at least prefix them with something, e.g., activity-stream.${name}.json?

Member

Mardak commented Oct 13, 2017

Frames (60fps) from first paint to placeholders, strings, stories:

preload = false:
26, 33, 46
21, 28, 36
21, 29, 40
21, 29, 42
25, 35, 41

preload = true:
23, 29, 39
31, 41, 41
23, 35, 40
20, 27, 37
17, 23, 29

So the median time to stories is 41 frames with preload = false vs 39 frames with preload = true. So yes, preload?

@rlr

rlr commented Oct 13, 2017

Member

Mardak commented Oct 13, 2017

You can indeed await a non-Promise value. But I actually meant you don't need to re-assign to this._cache; that value only needs to be assigned once, in _load.

@rlr

rlr commented Oct 13, 2017

  async set(key, value) {
    const data = await this._load();
    data[key] = value;
    this._persist(data);

@rlr

rlr Oct 13, 2017

Member

I guess I don't really need to pass data here.

@Mardak

Mardak Oct 13, 2017

Member

Passing data avoids awaiting _cache again in _persist, since we already got it a few lines back.

   * Load the cache into memory if it isn't already.
   */
  _load() {
    return this._cache || (this._cache = new Promise(async resolve => {

@rlr

rlr Oct 13, 2017

Member

Hmm, I think I'm still confused as to how this._cache goes from being a Promise to being an object.

@rlr

rlr Oct 13, 2017

Member

I guess it is always a Promise, but we're updating the underlying object it resolved to?

@Mardak

Mardak Oct 13, 2017

Member

this._cache is always a Promise, but when we await it, we get the same original resolved object each time.
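
A tiny console illustration of that point:

let cache;
const load = () => cache || (cache = Promise.resolve({}));
(async () => {
  const a = await load();
  a.stories = ["s1"];
  const b = await load(); // same Promise, so the same resolved object
  console.log(a === b, b.stories); // true ["s1"]
})();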

review comments have been addressed

@Mardak

Mardak approved these changes Oct 13, 2017

@Mardak Mardak merged commit 1527d05 into mozilla:master Oct 13, 2017

1 check passed: continuous-integration/travis-ci/pr (The Travis CI build passed)

@rlr rlr deleted the rlr:gh3606/pocket-cache branch Oct 13, 2017
