Bug 1169576 - [Messages][NG] Implement Conversation service: method for streaming joined threads and drafts list. r=julien #31051
Conversation
@@ -79,9 +79,9 @@
<script defer src="views/shared/js/inter_instance_event_dispatcher.js"></script>
<script defer src="views/shared/js/time_headers.js"></script>
<script defer src="views/shared/js/contacts.js"></script>
<script defer src="views/shared/js/utils.js"></script>
Utils is required by Drafts, and the import-scripts trick will try to call `importScripts` for Utils since it's not loaded yet - and that will fail in a window context :)
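To illustrate the dual-context problem described above, here is a hypothetical sketch (all names are illustrative, not the actual Gaia loader): a script shared between workers and windows can lazily pull a missing dependency with `importScripts()` in a worker, but that function does not exist in a window, so a window must load the file with a `<script>` tag up front.

```javascript
// Hypothetical sketch of the dual-context dependency problem: lazy
// loading works in a worker via importScripts(), but a window context
// has no importScripts and must rely on <script> tags instead.
function ensureLoaded(globalName, scriptPath) {
  if (typeof globalThis[globalName] !== 'undefined') {
    return; // Already loaded, e.g. via a <script defer> tag in a window.
  }

  if (typeof importScripts === 'function') {
    // Worker context: synchronous lazy load is possible.
    importScripts(scriptPath);
  } else {
    // Window context: importScripts is unavailable, fail loudly.
    throw new Error(globalName + ' must be loaded via a <script> tag');
  }
}
```

This is why `utils.js` has to appear in the markup before anything that depends on it.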
'Conversation with id %s is retrieved from stream', thread.id
);

onConversationRetrieved(new Thread(thread));
Since the client is hosted together with the view, it's responsible for returning a complete model, not just raw data.
I think we should even cache/store the thread here, but we can do it later since we might also need to rewrite Threads in the future.
My concern here is that we're calling onConversationRetrieved synchronously; it might be one of the reasons for the slowness? I wonder whether we should leverage the Utils.async tool we have and use promises instead? Or maybe promises are too "microtask-y" and that wouldn't change anything, and we'd need e.g. a requestAnimationFrame (maybe inside the callback instead of here, though).
Quite thinking out loud here...
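The deferral being mused about could be sketched roughly like this. `defer` is a stand-in helper (in a browser it could be `requestAnimationFrame`, falling back to `setTimeout`); this is not the actual Utils.async API, just an illustration of moving the notification off the cursor's callback.

```javascript
// Illustrative sketch: notify the view in a later task instead of
// synchronously from the cursor callback. `defer` is a hypothetical
// helper, not the real Utils.async API.
const defer = typeof requestAnimationFrame === 'function' ?
  requestAnimationFrame :
  (callback) => setTimeout(callback, 0);

function notifyAsync(onConversationRetrieved, thread) {
  // The cursor can keep iterating; the view is notified asynchronously.
  defer(() => onConversationRetrieved(thread));
}
```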
> my concern here is that we're calling onConversationRetrieved synchronously; might be one of the reasons of the slowness?
Hmm, I'll check, but I was thinking that this shouldn't affect overall performance. I mean, this doesn't stop the cursor from iterating: once a thread is retrieved, it's passed further via an asynchronous postMessage from the iframe to the SharedWorker (via BroadcastChannel), and then again asynchronously through a MessageChannel to the Window. I was assuming we render a thread faster than the next thread goes through this chain... But as always, need to measure first :)
Some "funny" numbers for you guys @steveck-chung @julienw. Here is the raptor result (20 runs, reference-workload-light, for the extracted Inbox view with views/inbox/index.html as the manifest entry point):
So going to experiment and try batching...
90ms for each thread instead of 15ms. Ouch.
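The batching idea could look roughly like this. `onBatchRetrieved` and the batch size are hypothetical; the point is to trade per-thread messaging overhead for slightly later rendering.

```javascript
// Sketch of batching streamed conversations before notifying the view.
// Names are illustrative, not from the actual patch.
function createBatcher(onBatchRetrieved, batchSize = 10) {
  let batch = [];

  return {
    push(thread) {
      batch.push(thread);
      if (batch.length >= batchSize) {
        this.flush();
      }
    },

    flush() {
      if (batch.length > 0) {
        onBatchRetrieved(batch);
        batch = [];
      }
    }
  };
}
```

The stream's `listen` callback would call `push(thread)` for every result and `flush()` once the cursor is done, so a trailing partial batch is not lost.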
* Type description for the ConversationSummary object that is returned from
* the Conversation service.
* @typedef {Object} ConversationSummary
* @property {number} id Unique identifier of the conversation.
I think it should be a string (for drafts, draftId is actually a string). In my mind back then, I had ids that started with `draft-` for drafts and `thread-` for threads (could be `conversation` now, btw).
Hmm, currently it should be a number for both thread [1] and draft [2]. Am I missing something?

gaia/apps/sms/services/js/drafts.js, line 209 in ba33139:
throw new Error('Draft id must be a number');
Mmm ok, I likely thought of the DOM id and of what I would like to see for draft/conversation ids in my mind ;)
I'm mainly afraid of collisions if we don't have a 100%-safe way to generate different ids for drafts and conversations. Maybe when we have a gaia-based DB we can ensure this with numbers, but nowadays I don't see how this is reliably possible. We're just lucky for now ;)
> Maybe when we'll have a gaia-based DB we can ensure this with numbers but nowadays I don't see how this is reliably possible. We're just lucky for now ;)
Yep, exactly! I'll keep number since it reflects the current state of things, and we can migrate to strings later on.
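For reference, the prefixed string ids discussed above would make draft/conversation collisions impossible by construction. A minimal sketch (the prefixes and helper names are hypothetical, and the code keeps numeric ids for now):

```javascript
// Hypothetical prefixed string ids: 'draft-42' can never collide with
// 'thread-42', regardless of how the numeric parts are generated.
function toGlobalId(type, numericId) {
  return type + '-' + numericId; // e.g. 'draft-42' or 'thread-42'
}

function parseGlobalId(globalId) {
  const [type, id] = globalId.split('-');
  return { type, id: Number(id) };
}
```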
getAllConversationsStream.listen((thread) => {
  debug('Conversation with id %s is retrieved from stream', thread.id);

  onConversationRetrieved(new Thread(thread));
> Hmm, I'll check, but I was thinking that this shouldn't affect overall performance. I mean, this doesn't stop the cursor from iterating: once a thread is retrieved, it's passed further via an asynchronous postMessage from the iframe to the SharedWorker (via BroadcastChannel), and then again asynchronously through a MessageChannel to the Window. I was assuming we render a thread faster than the next thread goes through this chain... But as always, need to measure first :)
Be careful with DOM Events, they're actually rarely asynchronous. (Cross-frame postMessage actually is, but I'm not sure about BroadcastChannel.)
> ...but not sure for BroadcastChannels
I believe they're async as well. Just tried the following test case:

```js
var sender = new BroadcastChannel('channel');
var receiver = new BroadcastChannel('channel');

receiver.onmessage = () => {
  console.log('Time when on message handler invoked %s', new Date());

  // Keep the handler busy for a while.
  var payload = Array.from({ length: 9999999 });
  payload.forEach(() => {});
  payload.forEach(() => {});

  console.log('Time when on message handler completed %s', new Date());
};

console.log('Time before message is posted %s', new Date());
sender.postMessage('a');
console.log('Time after message is posted %s', new Date());
```

Output is:

```
Time before message is posted Fri Jul 31 2015 19:53:52 GMT+0200 (CEST)
Time after message is posted Fri Jul 31 2015 19:53:52 GMT+0200 (CEST)
Time when on message handler invoked Fri Jul 31 2015 19:53:53 GMT+0200 (CEST)
Time when on message handler completed Fri Jul 31 2015 19:54:04 GMT+0200 (CEST)
```
But @bakulf should know for sure.
'use strict';

var shimHostIframe = document.querySelector('.shim-host');
Nit: how about we move ConversationClient.init(App.instanceId) here as well? Maybe it's better to aggregate the client init/service bootstrap in the same place, so startup doesn't need to reference App or the shim-related client. (I was thinking that this part could be in Startup.init, but I'm fine if you want to sync it with the shim bootstrap.)
> I was thinking that this part could be in Startup.init but I'm fine if you want to sync with shim bootstrap
Yeah, maybe it will be better to keep this in startup - I'll move it there.
I'll put the performance numbers I've got so far here, one GitHub comment per approach I've investigated.

1st approach: master (light workload). Don't have much to say here; the "performance spikes" are likely related to the maxReadAheadEntries setting, when Gecko reads a new page.
2nd approach: master (light workload) + a small change that makes master behave a bit more like ConversationService: wait for Drafts.request before rendering threads. It's just for a fairer comparison with ConversationService :) First we start the threads cursor, then call Drafts.request.
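The 2nd approach's ordering can be sketched roughly like this. `Drafts.request` follows the Gaia naming, but `getThreadsStream` and `renderThread` are stand-ins for the real code; this is illustrative wiring, not the actual change.

```javascript
// Illustrative sketch: start the threads cursor right away, but hold
// rendering of each thread until Drafts.request() has resolved.
function renderInbox(Drafts, getThreadsStream, renderThread) {
  const draftsReady = Drafts.request();

  getThreadsStream().listen((thread) => {
    // Threads keep streaming in, but only render once drafts are known.
    draftsReady.then(() => renderThread(thread));
  });

  return draftsReady;
}
```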
3rd approach: ConversationClient ---SharedWorker(MessageChannel)---> ConversationService ---BC---> MobileMessageShim (iframe). That's our original idea, where the main service runs in a SharedWorker and the shim lives inside an iframe. Here are the known problems:
4th approach: ConversationClient ---window(MessageChannel)---> ConversationService (window) ---BC---> MobileMessageShim (iframe). Here I've moved ConversationService to the main window to avoid the SharedWorker startup performance problems. The bridge still uses a MessageChannel for same-window communication. It's funny, but I don't see any improvement at all. There is a huge delay between the moment ConversationClient is connected to ConversationService and the moment the first method call reaches ConversationService. In between we see that the iframe is loaded and starts to bootstrap; one guess is that the time is spent on MessageChannel initialization, but I don't see such a delay in the SharedWorker case... Going to measure how long it takes to establish the message channel.
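A rough sketch of that measurement: time how long the first message takes to cross a freshly created MessageChannel. Purely illustrative; the real delay may of course come from elsewhere in the bridge.

```javascript
// Measure the round-trip-free latency of the first message across a
// MessageChannel. Names and approach are illustrative only.
function measureChannelLatency() {
  return new Promise((resolve) => {
    const { port1, port2 } = new MessageChannel();
    const start = performance.now();

    port2.onmessage = () => {
      port1.close();
      port2.close();
      resolve(performance.now() - start);
    };

    port1.postMessage('ping');
  });
}
```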
5th approach: ConversationClient ---LocalBroadcastChannel---> ConversationService (window) ---BC---> MobileMessageShim (iframe). This is the same as the 4th approach, but I've replaced the MessageChannel with a LocalBroadcastChannel. Again, something weird is going on between connection and the first method call - 100ms is too much for same-window communication. Also here we see that someone competes with
6th approach: ConversationClient ---LocalBroadcastChannel---> ConversationService (window) ---LocalBroadcastChannel---> MobileMessageShim (iframe). Here I tried to replace the remaining BroadcastChannel with a LocalBroadcastChannel as well. Saved ~250ms more for the full list; not impressive. Still a ton of time is spent on loading and connecting to the iframe service.
Thank you for narrowing down where the time is being spent here, Oleg. I want to get these core use cases inside a test app in the Bridge repo so we can keep an eye on the areas we need to target in Gecko/Gaia. We really need platform guidance on this!
7th approach: ConversationClient ---window(MessageChannel)---> ConversationService (window) ---window(MessageChannel)---> MobileMessageShim (window). Here all services and clients are in the same window: no iframe, no BroadcastChannel. Sooo, the numbers are quite bad, something is definitely wrong here. We don't create two
8th approach: ConversationClient ---LocalBroadcastChannel---> ConversationService (window) ---LocalBroadcastChannel---> MobileMessageShim (window). The most workaround-y approach here: all services are in the same window, no iframes, no BroadcastChannel, no MessageChannels. Speed is only influenced by my first, crappy LocalBroadcastChannel custom endpoint implementation and the bridge. We've got a +300ms boost compared to the best case above. The main problems here are:
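For readers unfamiliar with the custom endpoint, a same-window LocalBroadcastChannel could look roughly like the sketch below: endpoints keyed by channel name, with delivery deferred so the semantics stay asynchronous like the real BroadcastChannel. This is an illustration, not the actual Gaia implementation.

```javascript
// Minimal same-window stand-in for BroadcastChannel. Every endpoint
// created with the same name receives messages posted by the others.
const localChannels = new Map();

class LocalBroadcastChannel {
  constructor(name) {
    this.name = name;
    this.onmessage = null;

    if (!localChannels.has(name)) {
      localChannels.set(name, new Set());
    }
    localChannels.get(name).add(this);
  }

  postMessage(data) {
    for (const endpoint of localChannels.get(this.name)) {
      if (endpoint !== this && typeof endpoint.onmessage === 'function') {
        // Defer delivery to mimic BroadcastChannel's async behavior.
        setTimeout(() => endpoint.onmessage({ data }), 0);
      }
    }
  }

  close() {
    localChannels.get(this.name).delete(this);
  }
}
```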
I know I'm repeating myself :) did you try to profile using WebIDE? This may help to find where this 100ms delay comes from. Now, as an answer to #31051 (comment), it's definitely weird. The only thing I can think of is that we have events competing with microtasks (or maybe internal Gecko messages?) - events would have less priority. Again, only profiling could help, in my opinion.
Not yet, but going to try today! Finished with raptor, now I'll get used to the WebIDE profiler :)
…or streaming joined threads and drafts list. r=julien, schung
…vice-get-threads Bug 1169576 - [Messages][NG] Implement Conversation service: method for streaming joined threads and drafts list. r=julien
…ion-service-get-threads Revert "Merge pull request #31051 from azasypkin/bug-1169576-conversation-service-get-threads"
No description provided.