Performance improvements in web-interface #421

Closed
hugbug opened this Issue Jul 27, 2017 · 6 comments

Member

hugbug commented Jul 27, 2017

Inspired by discussion in #419.

There is potential to improve the web-interface with regard to performance and efficiency when transferring data between the web-browser and NZBGet.

This issue is to investigate areas of the web-interface where such improvements are possible and worth the possibly increased complexity. The following areas have been identified so far:

  • reduce number of requests when initially loading web-interface;
  • call multiple API methods simultaneously;
  • reduce amount of data transferred on UI updates by avoiding sending of unchanged data;
  • support for keep alive connections.

@hugbug hugbug added the feature label Jul 27, 2017

@hugbug hugbug added this to the v20 milestone Jul 27, 2017

hugbug added a commit that referenced this issue Jul 27, 2017

#421: reduce number of requests when loading webui
by combining all javascript-files into one and all css-files into one
Member

hugbug commented Jul 28, 2017
Reduce number of requests when initially loading web-interface

The web-interface in NZBGet consists of one HTML page (index.html), many JavaScript files, two CSS files and a few image files. Once loaded, the JavaScript application communicates with NZBGet via the API to obtain the current program state.

  • html: 1 file;
  • javascript: 4 library files + 11 nzbget files;
  • css: 1 library file + 1 nzbget file;
  • images: 4 files;
  • api: 6 calls.

When loading the web-interface the browser has to fetch each required file with a separate request.
Total: 28 requests.

JavaScript files account for half of all load requests. A well-known technique to minimise the number of requests is to pack all files into one big file. That could be done when building (compiling) NZBGet, but it would slow down the development of the web-interface.

Instead of combining files during the build step, another solution was developed: NZBGet's web-server can join the files itself. A special URL syntax was introduced to tell the web-server which files to join.

Instead of:

<script language="javascript" type="text/javascript" src="index.js"></script>
<script language="javascript" type="text/javascript" src="util.js"></script>
<script language="javascript" type="text/javascript" src="downloads.js"></script>
...

we now write:

<script language="javascript" type="text/javascript" 
    src="combined.js?index.js+util.js+downloads.js+..."></script>

The two CSS files were likewise combined into one.
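On the server side, joining amounts to splitting the query part of such a URL on "+" and concatenating the named files. A minimal sketch of the parsing step, with an illustrative function name (NZBGet's real implementation is in C++, not JavaScript):

```javascript
// Split a combined URL such as "combined.js?index.js+util.js" back into
// the individual file names the web-server has to concatenate.
// Illustrative sketch only, not NZBGet's actual code.
function splitCombined(url) {
  const query = url.split('?')[1] || '';
  return query.split('+').filter((name) => name.length > 0);
}
```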

What about debugging?

It's not very debugging-friendly to have one JavaScript file of more than 10K lines. To make debugging easier, the combining of JavaScript files is active only when NZBGet is compiled in release mode; in debug builds each JavaScript file is loaded individually.

Results

The total number of requests during the initial load of the web-interface has been reduced from 28 to 14. The amount of transferred data is not affected much by this change (as expected).

How much does this save in (milli)seconds? See the next post.


Member

hugbug commented Jul 28, 2017

Call multiple API methods simultaneously

The web-interface is a JavaScript application communicating with NZBGet via the API. When the web-interface is initially loaded it sends six API requests to NZBGet to receive various information such as the program configuration, the list of downloads, the history and the message list. The JavaScript application then updates the information on the page on a regular basis (once per second by default); for that it again sends API requests to NZBGet, four requests per update (list of downloads, history, messages, status).

All these requests are currently sent in a chain: when the first request completes, the second is sent, and so on.

To improve the overall load time the API requests should be sent simultaneously. That has now been implemented.
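The difference between chained and simultaneous dispatch can be sketched as follows; `rpcAll` and `transport` are illustrative names, not NZBGet's actual API:

```javascript
// Fire all RPC calls at once and collect the responses in request order.
// `transport(method, callback)` stands in for an XMLHttpRequest-based call.
function rpcAll(methods, transport, done) {
  const results = new Array(methods.length);
  let pending = methods.length;
  if (pending === 0) return done(results);
  methods.forEach((method, i) => {
    // Every request is started immediately instead of waiting for the
    // previous one to complete.
    transport(method, (response) => {
      results[i] = response;
      if (--pending === 0) done(results); // all responses have arrived
    });
  });
}
```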


Member

hugbug commented Jul 28, 2017

Speed test results

This was not easy to measure; the speed varies a lot. I ran each test many times and took the best number. The average numbers were considerably worse.

NZBGet and web-browser both running on the same PC

Results are in seconds until the web-interface is fully loaded. The first number is from NZBGet v19.1, the second number is after the first improvement, and the last number is after both improvements.

  • Firefox: 1.24 -> 0.84 -> 0.50
  • Firefox SSL: 1.88 -> 1.05 -> 0.80
  • Chrome: 0.65 -> 0.60 -> 0.52
  • Chrome SSL: 0.81 -> 0.73 -> 0.55
  • Safari: 0.37 -> 0.37 -> 0.30

NZBGet runs on a PVR Linux box, web-browser on PC

The first number is from NZBGet v19.1, the last number is after both improvements.

  • Firefox: 1.40 -> 0.62
  • Firefox SSL: 1.84 -> 0.78
  • Chrome: 0.80 -> 0.64
  • Chrome SSL: 1.05 -> 0.68
  • Safari: 0.47 -> 0.40

hugbug added a commit that referenced this issue Jul 29, 2017

#421, #422: adjustments in ETag support
1) convert MD5 hash into string using standard method instead of base64;
2) if par2 isn’t available, use another hash function from the Util unit;
3) avoid gzipping of response if it isn’t sent;
4) use BString class for header string formatting.

hugbug added a commit that referenced this issue Jul 30, 2017

#421, #422: added support for ETag and If-None-Match HTTP headers
The web server now supports ETag generation for static files and some RPC
methods. If If-None-Match is given in the request and matches the ETag
generated for the response, then no data is sent and 304 or 412 is returned.

The JavaScript RPC calls also support the new HTTP status code by buffering
ETags and responses, and will reuse the previous response if 412 is returned.

hugbug added a commit that referenced this issue Jul 30, 2017

#421, #422: adjustments in ETag support
1) convert MD5 hash into string using standard method instead of base64;
2) if par2 isn’t available, use another hash function from the Util unit;
3) avoid gzipping of response if it isn’t sent;
4) use BString class for header string formatting.

hugbug added a commit that referenced this issue Jul 30, 2017

#421, #422: allow caching for more API methods
1) All safe methods are now cacheable.
2) Corrected debug code, accidentally pushed in previous commit (#ifdef
DISABLE_PARCHECK).

hugbug added a commit that referenced this issue Jul 30, 2017

#421, #422: do not parse json-response if it will not be used
… and small refactorings and fixes for error reporting

hugbug added a commit that referenced this issue Jul 31, 2017

#421: new option "RemoteTimeout"
to define timeout for incoming connections including timeout for
keep-alive.

hugbug added a commit that referenced this issue Aug 1, 2017

Member

hugbug commented Aug 1, 2017

Reduce amount of data transferred on UI updates by avoiding sending of unchanged data

This feature has been contributed by @schnusch (#422).

NZBGet's built-in web-server (which provides web-interface and NZBGet API) now supports caching.

When serving a request, the web-server computes a hash of the generated response and passes it as an HTTP header along with the response body. Later, when the browser needs to load the same page, it sends the previously received hash back to NZBGet in a request header. NZBGet generates the page again, computes a new hash and compares it with the hash from the request. If the hashes match, the web-server returns only the response headers; the response body is omitted entirely.

This significantly reduces the amount of data transferred from NZBGet to web-browser.

NZBGet supports caching for static files (web-interface JavaScript files, images, etc.) and for API requests. The web-interface periodically updates the page; for that it queries the NZBGet web-server for the current status, download queue, history and messages. With caching, if the download queue or history have not changed since the last request, the amount of transferred data is significantly reduced (only response headers are sent to the browser).

Results

Web-interface loading

For the initial load of the web-interface (static files only, assuming an empty download queue, history and message list) files with a total size of about 300 KB (compressed) are transferred. With caching, if the files are available in the browser cache, this number drops to only 8 KB.

Status updates

The amount of data transferred during web-interface refreshes depends on the size of the download queue and history. Changes in the history are rare, so caching helps a lot there. For the download queue, caching usually helps only when NZBGet isn't downloading.

SSL

When connecting to NZBGet via SSL, the caching behaviour depends on the browser. Firefox caches all content. Safari and Chrome disable caching if the connection is marked as insecure, for example when using self-signed certificates.


Member

hugbug commented Aug 7, 2017

Support for keep alive connections

The web-browser fetches multiple files from the NZBGet web-server. The number of files was greatly reduced by the other optimisations documented in this issue; nonetheless, 15 files have to be fetched for the initial load of the web-interface. Then, during UI updates, multiple API requests are sent to NZBGet.

Previously, NZBGet closed the connection after sending the response to each request, and a new connection had to be established for the next request. HTTP/1.1 offers an improvement by allowing the connection to be kept open and reused. Until now NZBGet didn't support this feature, but now it does.

This is especially useful when connecting via SSL where connection negotiation (TLS handshake) is a costly process.
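The decision whether to keep a connection open follows the usual HTTP rules: in HTTP/1.1 persistent connections are the default unless the client sends "Connection: close", while in HTTP/1.0 keep-alive has to be requested explicitly. A sketch of that logic (illustrative, not NZBGet's actual code):

```javascript
// Decide whether the server may reuse the connection after responding.
function keepAlive(httpVersion, connectionHeader) {
  const token = (connectionHeader || '').trim().toLowerCase();
  if (token === 'close') return false;           // client asked to close
  if (httpVersion === 'HTTP/1.1') return true;   // persistent by default
  return token === 'keep-alive';                 // HTTP/1.0 needs opt-in
}
```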


@hugbug hugbug closed this Aug 7, 2017

Nothing4You commented Aug 16, 2017

Would it be possible to release a new testing version build with these changes?


hugbug added a commit that referenced this issue Aug 24, 2017

#421: update downloads table even if no changes
when there are active downloads in order to recalculate estimated time

hugbug added a commit that referenced this issue Sep 7, 2017

#432, #421, b4bcc82: remote-server cleanup
Use “close(socket)” when “accept”-ing connections and use
“shutdown(socket)” otherwise.

hugbug added a commit that referenced this issue Oct 9, 2017

#421: reduce number of requests when loading webui
by combining all javascript-files into one and all css-files into one

hugbug added a commit that referenced this issue Oct 9, 2017

#421, #422: added support for ETag and If-None-Match HTTP headers
The web server now supports ETag generation for static files and some RPC
methods. If If-None-Match is given in the request and matches the ETag
generated for the response, then no data is sent and 304 or 412 is returned.

The JavaScript RPC calls also support the new HTTP status code by buffering
ETags and responses, and will reuse the previous response if 412 is returned.

hugbug added a commit that referenced this issue Oct 9, 2017

#421, #422: adjustments in ETag support
1) convert MD5 hash into string using standard method instead of base64;
2) if par2 isn’t available, use another hash function from the Util unit;
3) avoid gzipping of response if it isn’t sent;
4) use BString class for header string formatting.

hugbug added a commit that referenced this issue Oct 9, 2017

#421, #422: allow caching for more API methods
1) All safe methods are now cacheable.
2) Corrected debug code, accidentally pushed in previous commit (#ifdef
DISABLE_PARCHECK).

hugbug added a commit that referenced this issue Oct 9, 2017

#421, #422: do not parse json-response if it will not be used
… and small refactorings and fixes for error reporting

hugbug added a commit that referenced this issue Oct 9, 2017

#421: new option "RemoteTimeout"
to define timeout for incoming connections including timeout for
keep-alive.

hugbug added a commit that referenced this issue Oct 9, 2017

hugbug added a commit that referenced this issue Oct 9, 2017

#421: update downloads table even if no changes
when there are active downloads in order to recalculate estimated time

hugbug added a commit that referenced this issue Oct 9, 2017

#432, #421, b4bcc82: remote-server cleanup
Use “close(socket)” when “accept”-ing connections and use
“shutdown(socket)” otherwise.