
[PERF] use micro-queue #4668

Merged · 1 commit · Dec 12, 2016

Conversation

@runspired (Contributor) commented Nov 18, 2016:

@stefanpenner and I were discussing a few days ago how it's better to use a more specialized micro-queue than to push everything into backburner. I was spelunking through a profile and noticed that pushing a bunch of records of one type wasn't batched even though I thought it should be; it turns out the run.join plus the excess calls to schedule were really hurting us. In a few limited runs this seemed to help by as much as 30% on the time to push a bunch of records of the same type into the store.
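For readers unfamiliar with the pattern, here is a minimal sketch of the idea, assuming a store shaped like Ember Data's (the helper names queueRelationshipSetup and drainQueue are mine, not the PR's): rather than scheduling one backburner job per record, the store accumulates the work in a plain array (the micro-queue) and schedules a single job to drain it.

```js
// Before (per-record scheduling): every pushed record pays backburner's
// per-job overhead.
// store._backburner.schedule('normalizeRelationships', store,
//   store._setupRelationships, internalModel, data);

// After (micro-queue): accumulate locally, schedule the flush exactly once.
function queueRelationshipSetup(store, internalModel, data) {
  store._pushedInternalModels.push(internalModel, data);
  if (store._relationshipFlush === null) {
    store._relationshipFlush =
      store._backburner.schedule('normalizeRelationships', store, drainQueue, store);
  }
}

function drainQueue(store) {
  const queue = store._pushedInternalModels;
  store._pushedInternalModels = [];
  store._relationshipFlush = null;
  // entries are pushed in pairs: internalModel, data, internalModel, data, ...
  for (let i = 0; i < queue.length; i += 2) {
    store._setupRelationships(queue[i], queue[i + 1]);
  }
}
```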

EDIT

So, upon request, I separated this out into two separate PRs (see #4698 for the first change and the numbers associated with it). Here are the numbers for making "just this change" on top of current master:

The numbers measure store._push for 238 records: 34 primary records and 204 related records (6 per primary record). They are in milliseconds, from 35 runs comparing current master to this PR.

current master

{
  "min": "17.30",
  "q1": "23.76",
  "q2": "24.86",
  "q3": "27.17",
  "max": "40.92"
}

PR

{
  "min": "17.58",
  "q1": "24.07",
  "q2": "26.19",
  "q3": "27.41",
  "max": "41.63"
}

On its own, this doesn't look like much of a win, if any, beyond a simpler data-flow story (we time/batch all the relationship updates together).

However, much as we didn't see a large improvement in #4698 alone either, here are the numbers once the two techniques are combined:

{
  "min": "16.67",
  "q1": "20.22",
  "q2": "22.41",
  "q3": "24.95",
  "max": "35.48"
}
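For context, q1/q2/q3 are the quartiles of the run timings (q2 is the median). A minimal sketch of how such a five-number summary can be computed, assuming an array of raw millisecond timings (the summarize helper is mine, not from the PR):

```js
// Five-number summary of benchmark timings, formatted like the objects above.
function summarize(timings) {
  const sorted = [...timings].sort((a, b) => a - b);
  // simple nearest-rank quantile; a real harness might interpolate instead
  const q = (p) => sorted[Math.min(sorted.length - 1, Math.floor(p * sorted.length))];
  return {
    min: sorted[0].toFixed(2),
    q1: q(0.25).toFixed(2),
    q2: q(0.5).toFixed(2),
    q3: q(0.75).toFixed(2),
    max: sorted[sorted.length - 1].toFixed(2)
  };
}

// e.g. summarize(thirtyFiveRunTimes) yields { min, q1, q2, q3, max } strings.
```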

@@ -1872,7 +1872,8 @@ Store = Service.extend({
     }
     if (data) {
       // normalize relationship IDs into records
-      this._backburner.schedule('normalizeRelationships', this, this._setupRelationships, internalModel, data);
+      this._pushedInternalModels.push(internalModel, data);
+      this._backburner.schedule('normalizeRelationships', this, this._setupRelationships);
Contributor Author:

this should also check for an existing flush

@stefanpenner (Member):

The idea is, let's use backburner for "the flush".

@runspired (Contributor, Author):

There are issues with this PR (beyond the wrong check for !== null) that I haven't finished debugging. This change leads to adapterRun being called many more times.

-      this._backburner.schedule('normalizeRelationships', this, this._setupRelationships, internalModel, data);
+      this._pushedInternalModels.push(internalModel, data);
+      if (this._relationshipFlush === null) {
+        this._relationshipFlush = this._backburner.schedule('normalizeRelationships', this, this._setupRelationships);
Member:

Rather than forcing the caller to notice this, I would make _setupRelationships do it, so we don't leak the scheduling to the caller.
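A minimal sketch of that suggestion, assuming the queue fields from this PR (the _flushRelationships and _setupRelationshipsFor names and exact bodies are my guesses, not the PR's code):

```js
// Callers just invoke _setupRelationships; the queueing and the one-shot
// scheduling of the flush stay internal to the store.
_setupRelationships(internalModel, data) {
  this._pushedInternalModels.push(internalModel, data);
  if (this._relationshipFlush === null) {
    this._relationshipFlush =
      this._backburner.schedule('normalizeRelationships', this, this._flushRelationships);
  }
},

_flushRelationships() {
  const pushed = this._pushedInternalModels;
  this._pushedInternalModels = [];
  this._relationshipFlush = null;
  for (let i = 0; i < pushed.length; i += 2) {
    // _setupRelationshipsFor is a hypothetical per-record helper; entries
    // arrive in [internalModel, data] pairs
    this._setupRelationshipsFor(pushed[i], pushed[i + 1]);
  }
},
```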

Contributor Author:

Addressed

@runspired (Contributor, Author):

Officially I'm seeing about a 5% overall improvement from this change on my usual production setup, but it also seems to have significantly smoothed out the spikes; the CPU profile and timelines are much cleaner.

       operation = 'deleteRecord';
     } else {
       operation = 'updateRecord';
     }

     resolver.resolve(_commit(adapter, this, operation, snapshot));
   }

+  pending.forEach((pendingItem) => {
Member:

Seems odd here, is this just leftover?

Contributor Author:

it is, will remove

@@ -2265,48 +2272,52 @@ Store = Service.extend({

     @method _push
     @private
-    @param {Object} data
+    @param {Object} apiDoc
Member:

I don't immediately grok what apiDoc means here. Is this a common term? Maybe you could add a short description after the name to make it clearer to folks?

Member:

Maybe jsonApiDoc?

Contributor Author:

^ jsonApiDoc

   },

   _hasModelFor(modelName) {
     return !!getOwner(this)._lookupFactory(`model:${modelName}`);
   },

+  _pushedInternalModels: null,
+  _relationshipFlush: null,
Member:

This should be set to null in the constructor (init) above.
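A minimal sketch of what that would look like, assuming the store's usual init hook (whether the queue starts as an empty array or null is my assumption, not taken from the PR):

```js
init() {
  this._super(...arguments);
  // per-instance state for the relationship micro-queue
  this._pushedInternalModels = [];  // assumed initial shape; could be null
  this._relationshipFlush = null;   // handle of the scheduled flush, if any
},
```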

@@ -252,7 +252,7 @@ test('buildURL - buildURL takes the records from findMany', function(assert) {
       }
     }
   });
-  post.get('comments').then(assert.wait(function(post) {
+  post.get('comments').then(assert.wait(function(rel) {
Member:

Unrelated

Contributor Author:

Crept in from a test failure I was investigating; will scrub.

@@ -2672,7 +2699,7 @@ function _commit(adapter, store, operation, snapshot) {
   if (adapterPayload) {
     payload = normalizeResponseHelper(serializer, store, modelClass, adapterPayload, snapshot.id, operation);
     if (payload.included) {
-      store.push({ data: payload.included });
+      store._push({ data: null, included: payload.included });
Member:

👍 good one

Contributor:

what is the difference between store.push and store._push here?

Contributor Author:

@bcardarella the main difference is that _push doesn't materialize the records; it is also likely to become the basis for a new public API that's currently going through the RFC process.
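Roughly, and hedged as my own reading of the internal API of that era rather than anything stated in this PR:

```js
// store.push normalizes the JSON-API document, wires up relationships, AND
// materializes DS.Model instances for the primary data:
const record = store.push({
  data: { type: 'post', id: '1', attributes: { title: 'Hi' } }
});
// => a DS.Model (or an array of them)

// store._push stops at the internal-model layer; no DS.Model objects are
// created until something actually asks for a record, which is cheaper when
// the payload is only sideloaded data that may never be read:
const internalModel = store._push({
  data: { type: 'post', id: '1', attributes: { title: 'Hi' } }
});
// => an InternalModel (or an array, or null for included-only payloads)
```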


@runspired (Contributor, Author):

Following meeting feedback, I'll be separating this into two PRs: I'll leave this one for the micro-queue and pull the move of run.join to _push into a separate PR.

@@ -2632,7 +2657,7 @@ Store = Service.extend({
     return;
   }

-  assert(`A ${relationship.parentType} record was pushed into the store with the value of ${relationship.key} being '${inspect(resourceIdentifiers)}', but ${relationship.key} is a hasMany relationship so the value must be an array. You should probably check your data payload or serializer.`, Array.isArray(resourceIdentifiers));
+  assert(`A ${relationship.record.modelName} record was pushed into the store with the value of ${relationship.key} being '${inspect(resourceIdentifiers)}', but ${relationship.key} is a hasMany relationship so the value must be an array. You should probably check your data payload or serializer.`, Array.isArray(resourceIdentifiers));
Contributor Author:

Same reason as the belongsTo change below: relationship.parentType was always undefined.

@@ -2621,7 +2646,7 @@ Store = Service.extend({
     return;
   }

-  assert(`A ${relationship.parentType} record was pushed into the store with the value of ${relationship.key} being ${inspect(resourceIdentifier)}, but ${relationship.key} is a belongsTo relationship so the value must not be an array. You should probably check your data payload or serializer.`, !Array.isArray(resourceIdentifier));
+  assert(`A ${relationship.record.modelName} record was pushed into the store with the value of ${relationship.key} being ${inspect(resourceIdentifier)}, but ${relationship.key} is a belongsTo relationship so the value must not be an array. You should probably check your data payload or serializer.`, !Array.isArray(resourceIdentifier));
Contributor Author:

Made this change because relationship.parentType is not a real property, which led to this message always reading "undefined".

@bmac bmac merged commit fef5302 into emberjs:master Dec 12, 2016
@bmac (Member) commented Dec 12, 2016:

Thanks @runspired.

surfacedamage pushed a commit to Aperta-project/Aperta that referenced this pull request Oct 31, 2017
Ember Data 2.12 introduced a bug which prevents destroys and saves from happening within the same run loop. We had originally updated Ember to 2.13 and then found this issue when testing repeaters, which routinely destroy and save within the same run loop, thus exposing the problem.

Since this bug is subtle and may be present in unknown places elsewhere
in the codebase, it was determined that downgrading to 2.11.* makes the
most sense as a temporary fix until the project can eventually be updated
to the latest version of Ember Data.

We originally attempted upgrading to several newer versions of Ember Data (instead of downgrading), but many other test failures showed up and there was not enough time to fix the root causes, so this is a sensible temporary fix.

bug introduced = emberjs/data#4668
bug reported = emberjs/data#4993
bug fixed = emberjs/data#4994