Move decoders to separate classes #1108
Conversation
Looks good in general, a few comments inline
@@ -2,12 +2,11 @@
 * noVNC: HTML5 VNC client
 * Copyright (C) 2012 Joel Martin
 * Copyright (C) 2018 Samuel Mannehed for Cendio AB
 * Copyright (C) 2018 Pierre Ossman for Cendio AB
If people are ok with it, I think we should just switch this over to "The noVNC authors" -- less copyright churn, among other things. A number of other open-source projects do this.
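For illustration, the header could end up looking something like this (the exact wording is my assumption, not something settled in this thread):

/*
 * noVNC: HTML5 VNC client
 * Copyright (C) 2018 The noVNC authors
 * Licensed under MPL 2.0 (see LICENSE.txt)
 */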
I'm fine with that. When I was the main author with a few people sending bug fixes, it was fine to just have my name, but at this point it's a team project, so we should reflect that. In the U.S., at least, an author holds copyright regardless of whether they label it as such, so it's more for documentation than for legal reasons.
core/rfb.js
Outdated
}

_handleDataRect() {
    this._timing.last_fbu = (new Date()).getTime();
Let's just drop this timing code -- it's not really adding much, IMO. We can always instrument other ways (especially using the built-in browser dev tools).
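For instance, a minimal sketch (my own example, not code from this PR; handleRect and decodeFn are hypothetical names) of leaning on the browser's performance API instead of hand-rolled timing fields:

// Wrap the hot path in performance marks and read the results in the
// dev tools timeline instead of keeping our own this._timing fields.
function handleRect(decodeFn) {
    performance.mark('rect-decode-start');
    decodeFn();
    performance.mark('rect-decode-end');
    performance.measure('rect-decode', 'rect-decode-start', 'rect-decode-end');
}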
Same with the encoding stats.
It was really helpful at one point, and gave information that was difficult to get out of profilers (although they are admittedly much better now). If we are considering any potentially performance-impacting changes in the near future, it might still be useful to have this. For example, WebAssembly is now supported widely enough that it could cover a good portion of noVNC users. Anybody inspired to attack that project?
WebAssembly in noVNC has been on my radar for a while now, just need to find time. Maybe this weekend...
I think the profilers should be good enough to get most of this now. /me shrugs
Well, if you do start playing with WebAssembly let me know. I would be at least interested in seeing what you are working on and might be able to contribute next week to some degree. I might need something to procrastinate on. :-)
I've started playing a bit. The problem I found is that since most of our heavy lifting is copying data around, and WebAssembly operates on a fixed chunk of memory, we kind of need the whole chain from "consume from websocket" to "render to screen" to be written in the source that translates to WebAssembly; otherwise we end up doing a lot more data copying, which would probably reduce performance.
Started playing around with that for fun (using Rust as the source language), but I also did some profiling (Chrome's profiling tools are pretty good).
I recorded ls -alR on a decent-sized file tree (a checkout of https://github.com/openshift/origin), coming from a QEMU VNC server (which likes to send lots of tiny Tight updates). TigerVNC takes around a minute to receive/render the frames in realtime. noVNC takes about the same in realtime playback. In full-speed playback, it takes noVNC about 23 seconds.
Looking at a performance profile (plus some playing around), 14-ish of those seconds are spent calling ctx.putImageData (a native browser method). Another 2-5-ish seconds are spent doing our actual computations in the Tight path (palette stuff, deflate, etc.). AFAICT (from some small tests), putImageData is actually fairly performant compared to a basic copy written in WebAssembly (that test was pretty loosely performed, so I can go back and retest more expansively if needed), so I'm not sure how much we gain from converting to WebAssembly. If we're receiving a lot of heavily compressed data, it might help there, but I'd expect more modern servers to more aggressively send plain image data, which we can lean on the browser for heavily.
If people have recordings that seem like they're not performant, I'd love to take a poke at them, but it seems like we do pretty decently.
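To give a rough idea of the kind of measurement described above, here is a minimal sketch (my own, not the harness actually used) of timing ctx.putImageData in isolation; the canvas size and iteration count are arbitrary:

// Time repeated putImageData calls on an offscreen canvas; the numbers here
// are placeholders, not the measurements quoted above.
const canvas = document.createElement('canvas');
canvas.width = 1024;
canvas.height = 768;
const ctx = canvas.getContext('2d');
const img = ctx.createImageData(canvas.width, canvas.height);

const start = performance.now();
for (let i = 0; i < 100; i++) {
    ctx.putImageData(img, 0, 0);
}
console.log(`100 putImageData calls: ${(performance.now() - start).toFixed(1)} ms`);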
@DirectXMan12 I also did some prototyping/experiments over the weekend (long flight from Japan). I'm going to keep discussing this in the existing asm.js issue (I renamed it to reflect WebAssembly): #527
Are the unit tests for Tight insufficiently robust? If so, we should have better unit tests for Tight. I'll try to do some manual testing of Tight, but in general we should try to make sure the unit tests are robust.
Looks good!
@DirectXMan12, this is the state of the Tight tests: 😄

it.skip('should handle the TIGHT encoding', function () {
    // TODO(directxman12): test this
});
it.skip('should handle the TIGHT_PNG encoding', function () {
    // TODO(directxman12): test this
});

Unfortunately it is not trivial to make some test cases with the whole zlib thing going on. Edit: not
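For what it's worth, a rough sketch (my own illustration, not anything in the existing test suite) of how zlib-compressed payload for a Tight test case could be produced with the pako library; feeding the bytes into the decoder under test is assumed and not shown:

import pako from 'pako';

// 4x4 rect of solid red pixels in 24-bit RGB, the kind of data a Tight
// "basic" rect carries before compression.
const pixels = new Uint8Array(4 * 4 * 3);
for (let i = 0; i < pixels.length; i += 3) {
    pixels[i] = 0xff;     // R
    pixels[i + 1] = 0x00; // G
    pixels[i + 2] = 0x00; // B
}

// Tight pushes pixel data through a zlib stream; deflate() gives us the
// compressed bytes a test could place after the compression-control byte
// and the length field.
const compressed = pako.deflate(pixels);
console.log(compressed.length, 'compressed bytes');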
Updated patch set that completely removes timing and stats. I've profiled this using the browser's built-in profiler instead.
The profilers in the browsers are much better these days and give us much better data than we can provide ourselves.
These have very special behaviour compared to normal data encodings, so separate them out and handle them separately.
Makes things a lot clearer by letting each encoding handle its own details and state.
@DirectXMan12, any objections to merging this?
I had another look, looks good to me.
Ping @DirectXMan12.
LGTM, sorry for the delay
Makes things a lot clearer by letting each encoding handle its own details and state.
I've just done the initial shuffle and basic tests right now. Need to do some profiling as well, and an attempt at handling alpha cursors (which requires this change).
If anyone can test TightPNG, that would be helpful. @DirectXMan12?
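As a rough illustration of the shape this refactoring gives each encoding (a simplified sketch of my own, not the exact code in this PR; the method signature and the Websock/Display helper calls are approximations of noVNC's internal APIs):

// Sketch of a decoder-per-class layout: each encoding keeps its own state
// and exposes a single entry point that consumes one rect from the socket.
export default class RawDecoder {
    // Returns true once the whole rect has been consumed and rendered.
    decodeRect(x, y, width, height, sock, display, depth) {
        const bytesPerPixel = depth === 8 ? 1 : 4;
        const total = width * height * bytesPerPixel;
        if (sock.rQwait("RAW", total)) {
            return false; // not enough data buffered yet, come back later
        }
        const data = sock.rQshiftBytes(total);
        display.blitImage(x, y, width, height, data, 0);
        return true;
    }
}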