Bug 862870 - [email] Implement slice batching to reduce/minimize UI reflows, improve responsiveness, sync time #280

Merged
merged 2 commits into mozilla-b2g:master from asutherland:batch-slice-review

2 participants

@asutherland
Collaborator

This is Mason's #277 with a review fix commit and the two commits squashed since my review is now concluded. I left my changes in their own commit for my own git history understanding.

changm and others added some commits
@changm changm Bug 862870. Implement splice batching with forced flushes if batching too much. r=asuth
59efcdc
@asutherland asutherland Bug 862870 - [email] Implement slice batching to reduce/minimize UI reflows, improve responsiveness, sync time. r=asuth, review fixes.

Minor naming nits and some test cleanups/improvements.  I also added some docs
to the test helpers in the process of adding new helpers.
3ac2c09
@asutherland asutherland merged commit 0af073b into mozilla-b2g:master

1 check passed

default: The Travis CI build passed
@asutherland asutherland deleted the asutherland:batch-slice-review branch
@asutherland asutherland referenced this pull request from a commit in asutherland/gaia
@asutherland asutherland Bug 862870 - [email] Implement slice batching to reduce/minimize UI reflows, improve responsiveness, sync time. r=asuth

Land mozilla-b2g/gaia-email-libs-and-more#280
dd28959
@snowmantw snowmantw referenced this pull request from a commit in snowmantw/gaia-doc
Greg Weng Update Gaia API documents for #dd2895976591bb67fd5b259402c46e6bb274515f
    Bug 862870 - [email] Implement slice batching to reduce/minimize UI reflows, improve responsiveness, sync time. r=asuth

Land mozilla-b2g/gaia-email-libs-and-more#280
e6ffb17
@hwine hwine referenced this pull request from a commit in mozilla/gecko-dev
Gaia Pushbot Bumping gaia.json for 2 gaia-central revision(s) a=gaia-bump
========

https://hg.mozilla.org/integration/gaia-central/rev/7e7fe9513146
Author: Andrew Sutherland <asutherland@asutherland.org>
Desc: Merge pull request #15327 from asutherland/mail-slice-batching

Bug 862870 - [email] Implement slice batching to reduce/minimize UI reflows, improve responsiveness, sync time. r=asuth

========

https://hg.mozilla.org/integration/gaia-central/rev/cadc3e8774f3
Author: Andrew Sutherland <asutherland@asutherland.org>
Desc: Bug 862870 - [email] Implement slice batching to reduce/minimize UI reflows, improve responsiveness, sync time. r=asuth

Land mozilla-b2g/gaia-email-libs-and-more#280
9cb801f
@arroway arroway referenced this pull request from a commit in arroway/gaia
@asutherland asutherland Bug 862870 - [email] Implement slice batching to reduce/minimize UI reflows, improve responsiveness, sync time. r=asuth

Land mozilla-b2g/gaia-email-libs-and-more#280
7b3609b
@alivedise alivedise referenced this pull request from a commit in alivedise/email
@asutherland asutherland Bug 862870 - [email] Implement slice batching to reduce/minimize UI reflows, improve responsiveness, sync time. r=asuth

Land mozilla-b2g/gaia-email-libs-and-more#280
0c2e059
Commits on Jan 14, 2014
2  data/lib/mailapi/activesync/folder.js
@@ -1384,7 +1384,7 @@ ActiveSyncFolderSyncer.prototype = {
initialSync: function(slice, initialDays, syncCallback,
doneCallback, progressCallback) {
- syncCallback('sync', false, true);
+ syncCallback('sync', true /* Ignore Headers */);
this.folderConn.sync(
$date.NOW(),
this.onSyncCompleted.bind(this, doneCallback, true),
13 data/lib/mailapi/imap/folder.js
@@ -548,7 +548,6 @@ console.log('BISECT CASE', serverUIDs.length, 'curDaysDelta', curDaysDelta);
// build the list of requests based on downloading required.
var requests = [];
bodyInfo.bodyReps.forEach(function(rep, idx) {
-
// attempt to be idempotent by only requesting the bytes we need if we
// actually need them...
if (rep.isDownloaded)
@@ -961,7 +960,7 @@ ImapFolderSyncer.prototype = {
*/
initialSync: function(slice, initialDays, syncCallback,
doneCallback, progressCallback) {
- syncCallback('sync', false);
+ syncCallback('sync', false /* Ignore Headers */);
// We want to enter the folder and get the box info so we can know if we
// should trigger our SYNC_WHOLE_FOLDER_AT_N_MESSAGES logic.
// _timelySyncSearch is what will get called next either way, and it will
@@ -1090,11 +1089,7 @@ ImapFolderSyncer.prototype = {
// intended to be a new thing for each request. So we don't want extra
// desire building up, so we set it to what we have every time.
//
- // We don't want to affect this value in accumulating mode, however, since
- // it could result in sending more headers than actually requested over the
- // wire.
- if (!this._syncSlice._accumulating)
- this._syncSlice.desiredHeaders = this._syncSlice.headers.length;
+ this._syncSlice.desiredHeaders = this._syncSlice.headers.length;
if (this._curSyncDoneCallback)
this._curSyncDoneCallback(err);
@@ -1288,10 +1283,6 @@ console.log("folder message count", folderMessageCount,
this._doneSync();
return;
}
- else if (this._syncSlice._accumulating) {
- // flush the accumulated results thus far
- this._syncSlice.setStatus('synchronizing', true, true, true);
- }
// - Increase our search window size if we aren't finding anything
// Our goal is that if we are going backwards in time and aren't finding
1  data/lib/mailapi/imap/imapchew.js
@@ -382,7 +382,6 @@ exports.chewHeaderAndBodyStructure =
*
*/
exports.updateMessageWithFetch = function(header, body, req, res, _LOG) {
-
var bodyRep = body.bodyReps[req.bodyRepIndex];
// check if the request was unbounded or we got back less bytes then we
13 data/lib/mailapi/imap/protocol/sync.js
@@ -142,9 +142,16 @@ Sync.prototype = {
if (!self.oncomplete)
return;
- self.oncomplete(
- self.newUIDs.length,
- self.knownUIDs.length
+ // Need a timeout here because we batch slices in SliceBridgeProxy and
+ // only want to call oncomplete after all those slices have been sent
+ // to keep the order the same.
+ window.setZeroTimeout(
+ function() {
+ self.oncomplete(
+ self.newUIDs.length,
+ self.knownUIDs.length
+ );
+ }
);
});
}
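The `setZeroTimeout` wrapper above exists because batched slice messages are themselves delivered on a zero timeout, so calling `oncomplete` synchronously would let it run before the slices it logically follows. The following standalone sketch (illustrative names only; Node's `setImmediate` stands in for `window.setZeroTimeout`) shows why deferring the completion callback with the same mechanism preserves delivery order.

```javascript
// Sketch: batched slices arrive via a zero timeout, so a completion
// callback must be deferred the same way to stay ordered after them.
const events = [];

function sendBatchedSlice(name) {
  // How SliceBridgeProxy-style batching delivers an update.
  setImmediate(() => events.push(name));
}

function finishSync(oncomplete) {
  // Defer, as the patched Sync.prototype does, instead of calling
  // oncomplete synchronously (which would record completion first).
  setImmediate(oncomplete);
}

sendBatchedSlice('slice');
finishSync(() => events.push('oncomplete'));
```

Because both callbacks go through the same FIFO deferral, `events` ends up as `['slice', 'oncomplete']` rather than the reverse.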
284 data/lib/mailapi/mailapi.js
@@ -849,8 +849,9 @@ MailHeader.prototype = {
__update: function(wireRep) {
this._wireRep = wireRep;
- if (wireRep.snippet !== null)
+ if (wireRep.snippet !== null) {
this.snippet = wireRep.snippet;
+ }
this.isRead = wireRep.flags.indexOf('\\Seen') !== -1;
this.isStarred = wireRep.flags.indexOf('\\Flagged') !== -1;
@@ -2106,6 +2107,12 @@ function MailAPI() {
this._slices = {};
this._pendingRequests = {};
this._liveBodies = {};
+ /**
+ * Functions to invoke to actually process/fire splices. Exists to support
+ * the fallout of waiting for contact resolution now that slice changes are
+ * batched.
+ */
+ this._spliceFireFuncs = [];
// Store bridgeSend messages received before back end spawns.
this._storedSends = [];
@@ -2177,7 +2184,8 @@ MailAPI.prototype = {
* Process a message received from the bridge.
*/
__bridgeReceive: function ma___bridgeReceive(msg) {
- if (this._processingMessage) {
+ // Pong messages are used for tests
+ if (this._processingMessage && msg.type !== 'pong') {
this._deferredMessages.push(msg);
}
else {
@@ -2193,8 +2201,9 @@ MailAPI.prototype = {
}
try {
var done = this[methodName](msg);
- if (!done)
+ if (!done) {
this._processingMessage = msg;
+ }
}
catch (ex) {
internalError('Problem handling message type:', msg.type, ex,
@@ -2219,96 +2228,130 @@ MailAPI.prototype = {
return true;
},
- _recv_sliceSplice: function ma__recv_sliceSplice(msg, fake) {
+ _fireAllSplices: function() {
+ for (var i = 0; i < this._spliceFireFuncs.length; i++) {
+ var fireSpliceData = this._spliceFireFuncs[i];
+ fireSpliceData();
+ }
+
+ this._spliceFireFuncs.length = 0;
+ },
+
+ _recv_batchSlice: function receiveBatchSlice(msg) {
var slice = this._slices[msg.handle];
if (!slice) {
- unexpectedBridgeDataError('Received message about a nonexistent slice:',
- msg.handle);
+ unexpectedBridgeDataError("Received message about nonexistent slice:", msg.handle);
return true;
}
- var transformedItems = this._transform_sliceSplice(msg, slice);
- // It's possible that a transformed representation is depending on an async
- // call to mozContacts. In this case, we don't want to surface the data to
- // the UI until the contacts are fully resolved in order to avoid the UI
- // flickering or just triggering reflows that could otherwise be avoided.
+ var updateStatus = this._updateSliceStatus(msg, slice);
+ for (var i = 0; i < msg.sliceUpdates.length; i++) {
+ var update = msg.sliceUpdates[i];
+ if (update.type === 'update') {
+ // Updates are performed and fire immediately/synchronously
+ this._processSliceUpdate(msg, update, slice);
+ } else {
+ // Added items are transformed immediately, but the actual mutation of
+ // the slice and notifications do not fire until _fireAllSplices().
+ this._transformAndEnqueueSingleSplice(msg, update, slice);
+ }
+ }
+
+ // If there are pending contact resolutions, we need to wait for them to
+ // complete before processing and firing the splices.
if (ContactCache.pendingLookupCount) {
ContactCache.callbacks.push(function contactsResolved() {
- this._fire_sliceSplice(msg, slice, transformedItems, fake);
+ this._fireAllSplices();
+ this._fireStatusNotifications(updateStatus, slice);
this._doneProcessingMessage(msg);
}.bind(this));
+ // (Wait for us to call _doneProcessingMessage before processing the next
+ // message. This also means this method will only push one callback.)
return false;
}
- else {
- this._fire_sliceSplice(msg, slice, transformedItems, fake);
- return true;
+
+ this._fireAllSplices();
+ this._fireStatusNotifications(updateStatus, slice);
+ return true; // All done processing; feel free to process the next msg.
+ },
+
+ _fireStatusNotifications: function (updateStatus, slice) {
+ if (updateStatus && slice.onstatus) {
+ slice.onstatus(slice.status);
}
},
- _transform_sliceSplice: function ma__transform_sliceSplice(msg, slice) {
- var addItems = msg.addItems, transformedItems = [], i;
- switch (slice._ns) {
- case 'accounts':
- for (i = 0; i < addItems.length; i++) {
- transformedItems.push(new MailAccount(this, addItems[i], slice));
- }
- break;
+ _updateSliceStatus: function(msg, slice) {
+ // - generate namespace-specific notifications
+ slice.atTop = msg.atTop;
+ slice.atBottom = msg.atBottom;
+ slice.userCanGrowUpwards = msg.userCanGrowUpwards;
+ slice.userCanGrowDownwards = msg.userCanGrowDownwards;
- case 'identities':
- for (i = 0; i < addItems.length; i++) {
- transformedItems.push(new MailSenderIdentity(this, addItems[i]));
- }
- break;
+ // Have to update slice status before we actually do the work
+ var generatedStatusChange = (msg.status &&
+ (slice.status !== msg.status ||
+ slice.syncProgress !== msg.progress));
- case 'folders':
- for (i = 0; i < addItems.length; i++) {
- transformedItems.push(new MailFolder(this, addItems[i]));
- }
- break;
+ if (msg.status) {
+ slice.status = msg.status;
+ slice.syncProgress = msg.syncProgress;
+ }
- case 'headers':
- for (i = 0; i < addItems.length; i++) {
- transformedItems.push(new MailHeader(slice, addItems[i]));
- }
- break;
+ return generatedStatusChange;
+ },
- case 'matchedHeaders':
- for (i = 0; i < addItems.length; i++) {
- transformedItems.push(new MailMatchedHeader(slice, addItems[i]));
+ _processSliceUpdate: function (msg, splice, slice) {
+ try {
+ for (var i = 0; i < splice.length; i += 2) {
+ var idx = splice[i], wireRep = splice[i + 1],
+ itemObj = slice.items[idx];
+ itemObj.__update(wireRep);
+ if (slice.onchange) {
+ slice.onchange(itemObj, idx);
}
- break;
-
-
- default:
- console.error('Slice notification for unknown type:', slice._ns);
- break;
+ if (itemObj.onchange) {
+ itemObj.onchange(itemObj, idx);
+ }
+ }
+ }
+ catch (ex) {
+ reportClientCodeError('onchange notification error', ex,
+ '\n', ex.stack);
}
+ },
- return transformedItems;
+ /**
+ * Transform the slice splice (for contact-resolution side-effects) and
+ * enqueue the eventual processing and firing of the splice once all contacts
+ * have been resolved.
+ */
+ _transformAndEnqueueSingleSplice: function(msg, splice, slice) {
+ var transformedItems = this._transform_sliceSplice(splice, slice);
+ var fake = false;
+ // It's possible that a transformed representation is depending on an async
+ // call to mozContacts. In this case, we don't want to surface the data to
+ // the UI until the contacts are fully resolved in order to avoid the UI
+ // flickering or just triggering reflows that could otherwise be avoided.
+ // Since we could be processing multiple updates, just batch everything here
+ // and we'll check later to see if any of our splices require a contact
+ // lookup.
+ this._spliceFireFuncs.push(function singleSpliceUpdate() {
+ this._fireSplice(splice, slice, transformedItems, fake);
+ }.bind(this));
},
- _fire_sliceSplice: function ma__fire_sliceSplice(msg, slice,
- transformedItems, fake) {
+ /**
+ * Perform the actual splice, generating notifications.
+ */
+ _fireSplice: function(splice, slice, transformedItems, fake) {
var i, stopIndex, items, tempMsg;
- // - generate namespace-specific notifications
- slice.atTop = msg.atTop;
- slice.atBottom = msg.atBottom;
- slice.userCanGrowUpwards = msg.userCanGrowUpwards;
- slice.userCanGrowDownwards = msg.userCanGrowDownwards;
- if (msg.status &&
- (slice.status !== msg.status ||
- slice.syncProgress !== msg.progress)) {
- slice.status = msg.status;
- slice.syncProgress = msg.progress;
- if (slice.onstatus)
- slice.onstatus(slice.status);
- }
// - generate slice 'onsplice' notification
if (slice.onsplice) {
try {
- slice.onsplice(msg.index, msg.howMany, transformedItems,
- msg.requested, msg.moreExpected, fake);
+ slice.onsplice(splice.index, splice.howMany, transformedItems,
+ splice.requested, splice.moreExpected, fake);
}
catch (ex) {
reportClientCodeError('onsplice notification error', ex,
@@ -2316,10 +2359,10 @@ MailAPI.prototype = {
}
}
// - generate item 'onremove' notifications
- if (msg.howMany) {
+ if (splice.howMany) {
try {
- stopIndex = msg.index + msg.howMany;
- for (i = msg.index; i < stopIndex; i++) {
+ stopIndex = splice.index + splice.howMany;
+ for (i = splice.index; i < stopIndex; i++) {
var item = slice.items[i];
if (slice.onremove)
slice.onremove(item, i);
@@ -2335,13 +2378,15 @@ MailAPI.prototype = {
}
}
// - perform actual splice
- slice.items.splice.apply(slice.items,
- [msg.index, msg.howMany].concat(transformedItems));
+ slice.items.splice.apply(
+ slice.items,
+ [splice.index, splice.howMany].concat(transformedItems));
+
// - generate item 'onadd' notifications
if (slice.onadd) {
try {
- stopIndex = msg.index + transformedItems.length;
- for (i = msg.index; i < stopIndex; i++) {
+ stopIndex = splice.index + transformedItems.length;
+ for (i = splice.index; i < stopIndex; i++) {
slice.onadd(slice.items[i], i);
}
}
@@ -2352,7 +2397,7 @@ MailAPI.prototype = {
}
// - generate 'oncomplete' notification
- if (msg.requested && !msg.moreExpected) {
+ if (splice.requested && !splice.moreExpected) {
slice._growing = 0;
if (slice.pendingRequestCount)
slice.pendingRequestCount--;
@@ -2362,7 +2407,8 @@ MailAPI.prototype = {
// reset before calling in case it wants to chain.
slice.oncomplete = null;
try {
- completeFunc(msg.newEmailCount);
+ // Maybe defer here?
+ completeFunc(splice.newEmailCount);
}
catch (ex) {
reportClientCodeError('oncomplete notification error', ex,
@@ -2372,31 +2418,46 @@ MailAPI.prototype = {
}
},
- _recv_sliceUpdate: function ma__recv_sliceUpdate(msg) {
- var slice = this._slices[msg.handle];
- if (!slice) {
- unexpectedBridgeDataError('Received message about a nonexistent slice:',
- msg.handle);
- return true;
- }
+ _transform_sliceSplice: function ma__transform_sliceSplice(splice, slice) {
+ var addItems = splice.addItems, transformedItems = [], i;
+ switch (slice._ns) {
+ case 'accounts':
+ for (i = 0; i < addItems.length; i++) {
+ transformedItems.push(new MailAccount(this, addItems[i], slice));
+ }
+ break;
- var updates = msg.updates;
- try {
- for (var i = 0; i < updates.length; i += 2) {
- var idx = updates[i], wireRep = updates[i + 1],
- itemObj = slice.items[idx];
- itemObj.__update(wireRep);
- if (slice.onchange)
- slice.onchange(itemObj, idx);
- if (itemObj.onchange)
- itemObj.onchange(itemObj, idx);
- }
- }
- catch (ex) {
- reportClientCodeError('onchange notification error', ex,
- '\n', ex.stack);
+ case 'identities':
+ for (i = 0; i < addItems.length; i++) {
+ transformedItems.push(new MailSenderIdentity(this, addItems[i]));
+ }
+ break;
+
+ case 'folders':
+ for (i = 0; i < addItems.length; i++) {
+ transformedItems.push(new MailFolder(this, addItems[i]));
+ }
+ break;
+
+ case 'headers':
+ for (i = 0; i < addItems.length; i++) {
+ transformedItems.push(new MailHeader(slice, addItems[i]));
+ }
+ break;
+
+ case 'matchedHeaders':
+ for (i = 0; i < addItems.length; i++) {
+ transformedItems.push(new MailMatchedHeader(slice, addItems[i]));
+ }
+ break;
+
+
+ default:
+ console.error('Slice notification for unknown type:', slice._ns);
+ break;
}
- return true;
+
+ return transformedItems;
},
_recv_sliceDead: function(msg) {
@@ -3386,10 +3447,13 @@ MailAPI.prototype = {
// Diagnostics / Test Hacks
/**
- * Send a 'ping' to the bridge which will send a 'pong' back, notifying the
- * provided callback. This is intended to be hack to provide a way to ensure
- * that some function only runs after all of the notifications have been
- * received and processed by the back-end.
+ * After a setZeroTimeout, send a 'ping' to the bridge which will send a
+ * 'pong' back, notifying the provided callback. This is intended to be a hack
+ * to provide a way to ensure that some function only runs after all of the
+ * notifications have been received and processed by the back-end.
+ *
+ * Note that ping messages are always processed as they are received; they do
+ * not get deferred like other messages.
*/
ping: function(callback) {
var handle = this._nextHandle++;
@@ -3397,10 +3461,20 @@ MailAPI.prototype = {
type: 'ping',
callback: callback,
};
- this.__bridgeSend({
- type: 'ping',
- handle: handle,
- });
+
+ // With the introduction of slice batching, we now wait to send the ping.
+ // This is reasonable because there are conceivable situations where the
+ // caller really wants to wait until all related callbacks fire before
+ // dispatching. And the ping method is already a hack to ensure correct
+ // ordering that should be done using better/more specific methods, so this
+ // change is not any less of a hack/evil, although it does cause misuse to
+ // potentially be more capable of causing intermittent failures.
+ window.setZeroTimeout(function() {
+ this.__bridgeSend({
+ type: 'ping',
+ handle: handle,
+ });
+ }.bind(this));
},
_recv_pong: function(msg) {
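The queue-then-fire flow in `_recv_batchSlice`/`_fireAllSplices` reduces to this standalone sketch (hypothetical names, not the real MailAPI): splices are transformed into closures immediately, but the slice is only mutated once any pending async contact lookups have resolved and the queue is drained in order.

```javascript
// Sketch of the _spliceFireFuncs idea: enqueue fire-closures now, run
// them later in order, then reset the queue exactly as the patch does.
function makeSpliceQueue() {
  const fireFuncs = [];
  return {
    enqueue(fn) { fireFuncs.push(fn); },
    fireAll() {
      for (const fn of fireFuncs) fn();
      fireFuncs.length = 0; // queue is reusable after a flush
    },
  };
}

// A toy "slice" whose items only change when the splices fire.
const items = [];
const queue = makeSpliceQueue();
queue.enqueue(() => items.push('a'));
queue.enqueue(() => items.push('b'));
// Nothing has mutated yet; a pending contact lookup could delay firing.
const before = items.length;
queue.fireAll();
```

Firing is idempotent once drained: a second `fireAll()` after the flush is a no-op because the closures were cleared.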
1  data/lib/mailapi/mailbridge.js
@@ -446,6 +446,7 @@ MailBridge.prototype = {
var idx = bsearchMaybeExists(proxy.markers, marker, strcmp);
if (idx === null)
continue;
+
proxy.sendUpdate([idx, folderMeta]);
}
},
113 data/lib/mailapi/mailslice.js
@@ -128,29 +128,16 @@ var SYNC_START_MINIMUM_PROGRESS = 0.02;
* Book-keeping and limited agency for the slices.
*
* === Batching ===
+ * Headers are removed, added, or modified using the onHeader* methods.
+ * The updates are sent to 'SliceBridgeProxy' which batches updates and
+ * puts them on the event loop. We batch so that we can minimize the number of
+ * reflows and painting on the DOM side. This also enables us to batch data
+ * received in network packets around the same time without having to handle it in
+ * each protocol's logic.
*
- * We do a few different types of batching based on the current sync state,
- * with these choices being motivated by UX desires and some efficiency desires
- * (in pursuit of improved UX). We want the user to feel like they get their
- * messages quickly, but we also don't want messages jumping all over the
- * screen.
- *
- * - Fresh sync (all messages are new to us): Messages are being added from
- * most recent to oldest. Currently, we just let this pass through, but
- * we might want to do some form of limited time-based batching. (ex:
- * wait 50ms or for notification of completion before sending a batch).
- *
- * - Refresh (sync): No action required because we either already have the
- * messages or get them in efficient-ish batches. This is followed by
- * what should be minimal changes (and where refresh was explicitly chosen
- * to be used rather than date sync for this reason.)
- *
- * - Date sync (some messages are new, some messages are known): We currently
- * get the known headers added one by one from youngest to oldest, followed
- * by the new messages also youngest to oldest. The notional UX (enforced
- * by unit tests) for this is that we want all the changes coherently and with
- * limits made effective. To this end, we do not generate any splices until
- * sync is complete and then generate a single slice.
+ * Currently, we only batch updates that are done between 'now' and the next time
+ * a zeroTimeout can fire on the event loop. In order to keep the UI responsive,
+ * we force flushes if we have more than 5 pending slices to send.
*/
function MailSlice(bridgeHandle, storage, _parentLog) {
this._bridgeHandle = bridgeHandle;
@@ -174,11 +161,6 @@ function MailSlice(bridgeHandle, storage, _parentLog) {
this.waitingOnData = false;
/**
- * When true, we are not generating splices and are just accumulating state
- * in this.headers.
- */
- this._accumulating = false;
- /**
* If true, don't add any headers. This is used by ActiveSync during its
* synchronization step to wait until all headers have been retrieved and
* then the slice is populated from the database. After this initial sync,
@@ -241,10 +223,7 @@ MailSlice.prototype = {
return;
if (this.headers.length) {
- // If we're accumulating, we were starting from zero to begin with, so
- // there is no need to send a nuking splice.
- if (!this._accumulating)
- this._bridgeHandle.sendSplice(0, this.headers.length, [], false, true);
+ this._bridgeHandle.sendSplice(0, this.headers.length, [], false, true);
this.headers.splice(0, this.headers.length);
this.startTS = null;
@@ -294,11 +273,10 @@ MailSlice.prototype = {
this.userCanGrowDownwards = false;
var delCount = this.headers.length - lastIndex - 1;
this.desiredHeaders -= delCount;
- if (!this._accumulating)
- this._bridgeHandle.sendSplice(
- lastIndex + 1, delCount, [],
- // This is expected; more coming if there's a low-end splice
- true, firstIndex > 0);
+ this._bridgeHandle.sendSplice(
+ lastIndex + 1, delCount, [],
+ // This is expected; more coming if there's a low-end splice
+ true, firstIndex > 0);
this.headers.splice(lastIndex + 1, this.headers.length - lastIndex - 1);
var lastHeader = this.headers[lastIndex];
this.startTS = lastHeader.date;
@@ -308,8 +286,7 @@ MailSlice.prototype = {
this.atTop = false;
this.userCanGrowUpwards = false;
this.desiredHeaders -= firstIndex;
- if (!this._accumulating)
- this._bridgeHandle.sendSplice(0, firstIndex, [], true, false);
+ this._bridgeHandle.sendSplice(0, firstIndex, [], true, false);
this.headers.splice(0, firstIndex);
var firstHeader = this.headers[0];
this.endTS = firstHeader.date;
@@ -342,30 +319,8 @@ MailSlice.prototype = {
this._updateSliceFlags();
break;
}
- if (flushAccumulated && this._accumulating) {
- if (this.headers.length > this.desiredHeaders) {
- this.headers.splice(this.desiredHeaders,
- this.headers.length - this.desiredHeaders);
- this.endTS = this.headers[this.headers.length - 1].date;
- this.endUID = this.headers[this.headers.length - 1].id;
- }
-
- this._accumulating = false;
- this._bridgeHandle.status = status;
- // XXX remove concat() once our bridge sending makes rep sharing
- // impossible by dint of actual postMessage or JSON roundtripping.
- this._bridgeHandle.sendSplice(0, 0, this.headers.concat(),
- requested, moreExpected,
- newEmailCount);
- // If we're no longer synchronizing, we want to update desiredHeaders
- // to avoid accumulating extra 'desire'.
- if (status !== 'synchronizing')
- this.desiredHeaders = this.headers.length;
- }
- else {
- this._bridgeHandle.sendStatus(status, requested, moreExpected, progress,
+ this._bridgeHandle.sendStatus(status, requested, moreExpected, progress,
newEmailCount);
- }
},
/**
@@ -425,9 +380,8 @@ MailSlice.prototype = {
}
this._updateSliceFlags();
- if (!this._accumulating)
- this._bridgeHandle.sendSplice(insertAt, 0, headers,
- true, moreComing);
+ this._bridgeHandle.sendSplice(insertAt, 0, headers,
+ true, moreComing);
},
/**
@@ -448,14 +402,12 @@ MailSlice.prototype = {
// end up with more headers than originally planned; if we get told about
// headers earlier than the last slot, we will insert them and grow without
// forcing a removal of something else to offset.
- if (hlen >= this.desiredHeaders && idx === hlen &&
- !this._accumulating)
+ if (hlen >= this.desiredHeaders && idx === hlen)
return;
- // If we are inserting (not at the end) and not accumulating (in which case
- // we can chop off the excess before we tell about it), then be sure to grow
+ // If we are inserting (not at the end) then be sure to grow
// the number of desired headers to be consistent with the number of headers
// we have.
- if (hlen >= this.desiredHeaders && !this._accumulating)
+ if (hlen >= this.desiredHeaders)
this.desiredHeaders++;
if (this.startTS === null ||
@@ -478,10 +430,9 @@ MailSlice.prototype = {
}
this._LOG.headerAdded(idx, header);
- if (!this._accumulating)
- this._bridgeHandle.sendSplice(idx, 0, [header],
- Boolean(this.waitingOnData),
- Boolean(this.waitingOnData));
+ this._bridgeHandle.sendSplice(idx, 0, [header],
+ Boolean(this.waitingOnData),
+ Boolean(this.waitingOnData));
this.headers.splice(idx, 0, header);
},
@@ -499,9 +450,7 @@ MailSlice.prototype = {
// There is no identity invariant to ensure this is already true.
this.headers[idx] = header;
this._LOG.headerModified(idx, header);
- // If we are accumulating, the update will be observed.
- if (!this._accumulating)
- this._bridgeHandle.sendUpdate([idx, header]);
+ this._bridgeHandle.sendUpdate([idx, header]);
}
},
@@ -515,10 +464,9 @@ MailSlice.prototype = {
var idx = bsearchMaybeExists(this.headers, header, cmpHeaderYoungToOld);
if (idx !== null) {
this._LOG.headerRemoved(idx, header);
- if (!this._accumulating)
- this._bridgeHandle.sendSplice(idx, 1, [],
- Boolean(this.waitingOnData),
- Boolean(this.waitingOnData));
+ this._bridgeHandle.sendSplice(idx, 1, [],
+ Boolean(this.waitingOnData),
+ Boolean(this.waitingOnData));
this.headers.splice(idx, 1);
// update time-ranges if required...
@@ -2357,12 +2305,9 @@ FolderStorage.prototype = {
// -- Bad existing data, issue a sync
var progressCallback = slice.setSyncProgress.bind(slice);
- var syncCallback = function syncCallback(syncMode, accumulateMode,
+ var syncCallback = function syncCallback(syncMode,
ignoreHeaders) {
slice.waitingOnData = syncMode;
- if (accumulateMode && slice.headers.length === 0) {
- slice._accumulating = true;
- }
if (ignoreHeaders) {
slice.ignoreHeaders = true;
}
2  data/lib/mailapi/pop3/sync.js
@@ -335,7 +335,7 @@ Pop3FolderSyncer.prototype = {
* at a time.
*/
initialSync: function(slice, initialDays, syncCb, doneCb, progressCb) {
- syncCb('sync', false /* accumulateMode */, true /* ignoreHeaders */);
+ syncCb('sync', true /* ignoreHeaders */);
this.sync('initial', slice, doneCb, progressCb);
},
61 data/lib/mailapi/slice_bridge_proxy.js
@@ -30,6 +30,12 @@ function SliceBridgeProxy(bridge, ns, handle) {
*/
this.userCanGrowUpwards = false;
this.userCanGrowDownwards = false;
+ /**
+ * We batch both slices and updates into the same queue. The MailAPI checks
+ * to differentiate between the two.
+ */
+ this.pendingUpdates = [];
+ this.scheduledUpdate = false;
}
exports.SliceBridgeProxy = SliceBridgeProxy;
@@ -42,33 +48,25 @@ SliceBridgeProxy.prototype = {
*/
sendSplice: function sbp_sendSplice(index, howMany, addItems, requested,
moreExpected, newEmailCount) {
- this._bridge.__sendMessage({
- type: 'sliceSplice',
- handle: this._handle,
+ var updateSplice = {
index: index,
howMany: howMany,
addItems: addItems,
requested: requested,
moreExpected: moreExpected,
- status: this.status,
- progress: this.progress,
- atTop: this.atTop,
- atBottom: this.atBottom,
- userCanGrowUpwards: this.userCanGrowUpwards,
- userCanGrowDownwards: this.userCanGrowDownwards,
newEmailCount: newEmailCount,
- });
+ type: 'slice',
+ };
+ this.addUpdate(updateSplice);
},
/**
* Issue an update for existing items.
*/
sendUpdate: function sbp_sendUpdate(indexUpdatesRun) {
- this._bridge.__sendMessage({
- type: 'sliceUpdate',
- handle: this._handle,
- updates: indexUpdatesRun,
- });
+ var update = indexUpdatesRun;
+ update.type = 'update';
+ this.addUpdate(update);
},
/**
@@ -78,8 +76,9 @@ SliceBridgeProxy.prototype = {
sendStatus: function sbp_sendStatus(status, requested, moreExpected,
progress, newEmailCount) {
this.status = status;
- if (progress != null)
+ if (progress != null) {
this.progress = progress;
+ }
this.sendSplice(0, 0, [], requested, moreExpected, newEmailCount);
},
@@ -88,6 +87,36 @@ SliceBridgeProxy.prototype = {
this.sendSplice(0, 0, [], true, true);
},
+ addUpdate: function sbp_addUpdate(update) {
+ this.pendingUpdates.push(update);
+ // If we batched a lot, flush now. Otherwise
+ // we sometimes get into a position where nothing happens
+ // and then a bunch of updates occur, causing jank
+ if (this.pendingUpdates.length > 5) {
+ this.flushUpdates();
+ } else if (!this.scheduledUpdate) {
+ window.setZeroTimeout(this.flushUpdates.bind(this));
+ this.scheduledUpdate = true;
+ }
+ },
+
+ flushUpdates: function sbp_flushUpdates() {
+ this._bridge.__sendMessage({
+ type: 'batchSlice',
+ handle: this._handle,
+ status: this.status,
+ progress: this.progress,
+ atTop: this.atTop,
+ atBottom: this.atBottom,
+ userCanGrowUpwards: this.userCanGrowUpwards,
+ userCanGrowDownwards: this.userCanGrowDownwards,
+ sliceUpdates: this.pendingUpdates
+ });
+
+ this.pendingUpdates = [];
+ this.scheduledUpdate = false;
+ },
+
die: function sbp_die() {
if (this.__listener)
this.__listener.die();
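The `addUpdate`/`flushUpdates` pair above can be reduced to this standalone sketch (assumed names; the real proxy sends batches to a `MailBridge`, and `setImmediate` stands in for `window.setZeroTimeout`): updates coalesce until the zero timeout fires, but a forced flush kicks in once more than a handful are pending so the UI never waits on one huge batch.

```javascript
// Sketch of the batching pattern: coalesce on a zero timeout, force a
// flush past a small threshold to avoid jank from oversized batches.
const MAX_PENDING = 5;

function makeBatcher(send) {
  const pending = [];
  let scheduled = false;

  function flush() {
    scheduled = false;
    if (pending.length === 0) return; // a forced flush may have emptied us
    send(pending.splice(0, pending.length)); // deliver and clear
  }

  return function addUpdate(update) {
    pending.push(update);
    if (pending.length > MAX_PENDING) {
      flush(); // force: don't let the batch grow unbounded
    } else if (!scheduled) {
      scheduled = true;
      setImmediate(flush); // stand-in for window.setZeroTimeout
    }
  };
}

const batches = [];
const addUpdate = makeBatcher((batch) => batches.push(batch));
for (let i = 0; i < 7; i++) addUpdate({ n: i });
// The 6th update forces a synchronous flush of 6 items; the 7th is
// delivered later by the deferred flush.
```

This mirrors the tradeoff the patch comments describe: pure timeout batching minimizes reflows, while the threshold keeps latency bounded when updates arrive in a burst.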
5 test/test-files.json
@@ -143,6 +143,11 @@
"variants": ["imap:fake", "imap:real", "activesync:fake", "pop3:fake"]
},
+ "test_splice_ordering.js": {
+ "variants_why": "nothing account-specific gets tested",
+ "variants": ["imap:fake"]
+ },
+
"test_account_logic.js": {
"variants": ["imap:fake", "imap:real", "activesync:fake", "pop3:fake"]
},
51 test/unit/resources/th_contacts.js
@@ -52,9 +52,18 @@ var TestContactsMixins = {
self._dbByEmail = {};
self._dbByContactId = {};
+ self._trappedFindCalls = null;
+
+ /**
+ * The fake mozContacts API.
+ *
+ * We support trapping by partially running the func and returning early.
+ * When the traps complete, the method gets called again with the request
+ * object originally returned. This structuring is arbitrary.
+ */
self.contactsAPI = window.navigator.mozContacts = {
- find: function(options) {
+ find: function(options, _hackReq) {
if (!options ||
options.filterBy.length !== 1 ||
(options.filterBy[0] !== 'email' &&
@@ -63,9 +72,22 @@ var TestContactsMixins = {
self._logger.unsupportedFindCall(options);
throw new Error("Unsupported find call!");
}
- self._logger.apiFind_begin(options.filterBy[0], options.filterOp,
- options.filterValue, null);
- var req = { onsuccess: null, onerror: null };
+ var req;
+ if (_hackReq) {
+ req = _hackReq;
+ }
+ else {
+ req = { onsuccess: null, onerror: null };
+ self._logger.apiFind_begin(options.filterBy[0], options.filterOp,
+ options.filterValue, null);
+ }
+
+ if (self._trappedFindCalls) {
+ self._trappedFindCalls.push(
+ { options: options, req: req });
+ return req;
+ }
+
window.setZeroTimeout(function() {
if (!req.onsuccess)
return;
@@ -257,6 +279,27 @@ var TestContactsMixins = {
this._logger.clearContacts();
this.contactsAPI._clear();
},
+
+ /**
+ * Cause calls to mozContacts.find() to get stored but not processed.
+ */
+ trapFindCalls: function() {
+ this._trappedFindCalls = [];
+ },
+
+ /**
+ * Cause all the calls to mozContacts.find() that `trapFindCalls` stuck in
+ * limbo to actually run now.
+ */
+ releaseFindCalls: function() {
+ var trapped = this._trappedFindCalls;
+ this._trappedFindCalls = null;
+
+ for (var i = 0; i < trapped.length; i++) {
+ var trap = trapped[i];
+ this.contactsAPI.find(trap.options, trap.req);
+ }
+ },
};
var LOGFAB = exports.LOGFAB = $log.register($module, {
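The trap/release mechanism added to the contacts helper above generalizes beyond `mozContacts.find()`. A generic, hypothetical sketch (`Trapper` and its method names are illustrative, not part of the test helpers):

```javascript
// Generic sketch of the trap/release pattern behind trapFindCalls /
// releaseFindCalls: while trapping is active, calls are queued instead of
// executed; release() replays them for real, in their original order.
function Trapper(fn) {
  this._fn = fn;
  this._trapped = null; // non-null array => trapping active
}

Trapper.prototype.call = function(arg) {
  if (this._trapped) {
    this._trapped.push(arg);
    return;
  }
  this._fn(arg);
};

Trapper.prototype.trap = function() {
  this._trapped = [];
};

Trapper.prototype.release = function() {
  var trapped = this._trapped;
  this._trapped = null; // clear first so replayed calls run for real
  for (var i = 0; i < trapped.length; i++) {
    this._fn(trapped[i]);
  }
};
```

The real helper additionally re-passes the request object it originally returned (the `_hackReq` argument) on replay, so callers keep working against the same handle; this sketch omits return values for brevity.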
test/unit/resources/th_main.js (30 lines changed)
@@ -215,8 +215,8 @@ var TestUniverseMixins = {
MailUniverse = self.universe = new $mailuniverse.MailUniverse(
function onUniverse() {
console.log('Universe created');
- var TMB = MailBridge = new $mailbridge.MailBridge(self.universe,
- self.__name);
+ var TMB = MailBridge = self._mailBridge =
+ new $mailbridge.MailBridge(self.universe, self.__name);
var TMA = MailAPI = self.MailAPI = new $mailapi.MailAPI();
var realSendMessage = $router.registerSimple(
@@ -260,6 +260,18 @@ var TestUniverseMixins = {
});
},
+ /**
+ * Spin up a MailAPI instance on the main thread whose MailBridge instance
+ * lives here (on the worker thread), relaying messages via a 'mail-bridge'
+ * router topic. This allows test code to run on the main thread, where the
+ * WebAPIs differ from what a worker sees.
+ *
+ * This is different from our bounced bridge instantiated in
+ * TestUniverseMixins.__constructor, where we create a MailAPI here on the
+ * worker and bridge the data to the main thread and back again just so we
+ * can test structured-cloning impact and simplify our unit tests by really
+ * just running on one thread.
+ */
ensureMainThreadMailAPI: function() {
if (this._mainThreadMailBridge)
return;
@@ -861,6 +873,20 @@ var TestCommonAccountMixins = {
},
/**
+ * Given a viewThing from do_openFolderView/etc., get the SliceBridgeProxy
+ * corresponding to the viewThing. Use this if you want to poke and prod at
+ * the slice.
+ */
+ getSliceBridgeProxyForView: function(viewThing) {
+ var bridge = this.testUniverse._mailBridge;
+ var mailSlice = viewThing.slice;
+ var proxy = bridge._slices[mailSlice._handle];
+ if (!proxy)
+ throw new Error('Unable to find SliceBridgeProxy!');
+ return proxy;
+ },
+
+ /**
* Alter the flag-equivalents on the server without affecting our local state.
*/
modifyMessageFlagsOnServerButNotLocally: function(viewThing, indices,
test/unit/test_imap_lazybodies.js (3 lines changed)
@@ -177,11 +177,12 @@ TD.commonCase('sync headers then download body', function(T, RT) {
// ASCII 4 bytes assumed
var snippet = content[1].slice(0, 4);
- eLazy.expect_namedValue('snippet', snippet);
+ // With batching, the 'body' notification now has to come first.
eLazy.expect_namedValue('body', {
isDownloaded: false,
amountDownloaded: 4
});
+ eLazy.expect_namedValue('snippet', snippet);
eLazy.expect_namedValue('body full', {
isDownloaded: true,
test/unit/test_splice_ordering.js (95 lines changed)
@@ -0,0 +1,95 @@
+/**
+ * Test slice/splice batching and ordering.
+ **/
+
+define(['rdcommon/testcontext', './resources/th_main',
+ './resources/th_contacts',
+ 'mailapi/mailapi',
+ 'exports'],
+ function($tc, $th_main, $th_contacts,
+ $mailapi,
+ exports) {
+
+var TD = exports.TD = $tc.defineTestsFor(
+ { id: 'test_splice_ordering' }, null,
+ [$th_main.TESTHELPER, $th_contacts.TESTHELPER], ['app']);
+
+
+/**
+ * Test that if we send multiple splices to a slice, they are processed
+ * in the correct order.
+ */
+TD.commonCase('Check Slices In Order', function(T, RT) {
+ T.group('setup');
+ // Create an empty universe just to create proper slices for us
+ var testUniverse = T.actor('testUniverse', 'U'),
+ testAccount = T.actor('testAccount', 'A', { universe: testUniverse }),
+ testContacts = T.actor('testContacts', 'contacts'),
+ eLazy = T.lazyLogger('misc');
+
+ var inboxFolder = testAccount.do_useExistingFolderWithType('inbox', '');
+ // Open a slice just for the sake of having a slice; don't care what happens.
+ var inboxView = testAccount.do_openFolderView(
+ 'syncs', inboxFolder, null, null,
+ { syncedToDawnOfTime: 'ignore' });
+
+ T.group('test ordering');
+ T.action(eLazy, "trap contact lookups, send slice updates, see no updates",
+ function() {
+ eLazy.expect_namedValue('pendingLookupCount', 1);
+ eLazy.expect_namedValue('processingMessage', null);
+ // We don't want the slice notifications or the contact resolution to happen
+ // during this step. But we do want to make sure that the batching setZeroTimeout
+ // has had a chance to fire and that roundtripping of messages has fully
+ // occurred. MailAPI.ping() includes both a zeroTimeout and the
+ // roundtripping.
+ eLazy.expect_event('roundtrip');
+
+ var bridgeProxy = testAccount.getSliceBridgeProxyForView(inboxView);
+ var sendSplice = bridgeProxy.sendSplice.bind(bridgeProxy);
+
+ // Make calls to mozContacts.find() not return until releaseFindCalls().
+ testContacts.trapFindCalls();
+
+ // Ask to resolve a contact. This will cause pendingLookupCount to hit 1,
+ // which will make the splice processing wait until we resolve the contacts.
+ testUniverse.MailAPI.resolveEmailAddressToPeep(
+ 'bob@bob.nul',
+ function(peep) {
+ eLazy.event('contact resolved!');
+ });
+ // Check that the lookup count did what we expected and that we didn't
+ // activate other request-processing deferral logic.
+ eLazy.namedValue('pendingLookupCount',
+ $mailapi.ContactCache.pendingLookupCount);
+ eLazy.namedValue('processingMessage',
+ testUniverse.MailAPI._processingMessage);
+
+ inboxView.slice.onsplice = function() {
+ eLazy.event('splice!');
+ };
+
+ // Send an empty splice, goes in batch 1.
+ sendSplice(0, 0, [], 0, false, false);
+ // Send another empty splice, also goes in batch 1.
+ sendSplice(0, 0, [], 0, false, false);
+
+ testUniverse.MailAPI.ping(function() {
+ eLazy.event('roundtrip');
+ });
+ });
+
+ T.action(eLazy, "resolve contact, see splices", function() {
+ var mailapi = testUniverse.MailAPI;
+
+ eLazy.expect_event('contact resolved!');
+ eLazy.expect_event('splice!');
+ eLazy.expect_event('splice!');
+
+ testContacts.releaseFindCalls();
+ });
+
+});
+
+}); // end define
+