
Add transparent compression of the .mem file, to make deployment easier #4

jedie opened this issue Jun 1, 2015 · 40 comments


jedie commented Jun 1, 2015

It seems that it will be re-downloaded every time :(

Maybe because of the .mem file extension?


rfk commented Jun 1, 2015

Likely yes, they're probably using a whitelist of file types that can be gzipped and cached. Incidentally, this is one of the reasons I was experimenting with an external CDN, which gives you more control over caching behaviour. (The other reason is so I can upload pre-zipped versions of the large assets, and do a better job of it than the default gzip-encoding settings in most webserver software).


jedie commented Jun 2, 2015

But we can use this as a trigger, because other users will also use a default server configuration and run into the same "problem"...

So, what about changing the extension?


rfk commented Jun 4, 2015

Good idea. I'm a little hesitant to change too much from the way emscripten does it, but we do ultimately have control over the name of this file because of https://github.com/pypyjs/pypyjs/blob/master/tools/extract_memory_initializer.py

So we need to find an extension that will (1) ensure the content-type is set to something binary like application/octet-stream or similar, and (2) will trigger sensible caching and gzipping behaviour from github pages. Any suggestion?


rfk commented Jun 4, 2015

(Although I note that it does seem to cache correctly for me, but it's not gzip-encoded so it takes around 12 seconds to download)


rfk commented Jun 4, 2015

According to http://githubengineering.com/rearchitecting-github-pages/ the github pages setup is a pretty standard nginx install with fastly in front of it, meaning the default set of types that will get compressed is likely as described here:

https://www.fastly.com/blog/new-gzip-settings-and-deciding-what-to-compress

I don't think I'd be comfortable pretending that this binary blob is any of those file types :-(


jedie commented Jun 4, 2015

Hm. That's annoying.

The best solution would be a file type that works with every server's default configuration.

Btw. is there no way to compress the file on creation and decompress it on-the-fly in the browser? Maybe with a better compression ratio, like LZMA...

e.g.: https://github.com/nmrugg/LZMA-JS

Then we'd be independent of the server configuration, and loading might even be faster because of the better compression.

Maybe use LZMA on all file requests as a speedup?!?


rfk commented Jun 4, 2015

Yes, we should definitely try that. LZMA does produce much better compression on these files, but I haven't benchmarked the LZMA-JS implementation on it.

The downside is it would break the browser's native caching facilities, but the tradeoff may be worth it. Feel like trying it out to see? :-)

@jedie jedie changed the title http://pypyjs.org/pypyjs-release/lib/pypy.vm.js.mem will not be cached?!? http://pypyjs.org/pypyjs-release/lib/pypy.vm.js.mem will not be cached?!? / compress with LZMA? Jun 11, 2015

jedie commented Jun 11, 2015

The lzma package has existed in Python since 3.3... I have made a test script to see how long compression takes and how good the compression ratio is

the script: https://gist.github.com/jedie/95225b0ecb89f23688f3
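The gist itself isn't reproduced here, but a minimal stdlib-only sketch of such a timing script could look like this (the function name and output format are illustrative, not taken from the gist; in the real test `data` would be the contents of pypy.vm.js or pypy.vm.js.mem):

```python
import lzma
import time

def benchmark_lzma(data, presets=(0, 3, 6, 9)):
    """Compress `data` with several LZMA presets; print and return
    (preset, compressed size, duration, ratio) for each."""
    results = []
    for preset in presets:
        start = time.perf_counter()
        compressed = lzma.compress(data, preset=preset)
        duration = time.perf_counter() - start
        ratio = len(compressed) / len(data) * 100
        results.append((preset, len(compressed), duration, ratio))
        print("preset=%d: %8d Bytes  %6.2f sec  %5.2f%%"
              % (preset, len(compressed), duration, ratio))
    return results

# e.g.: benchmark_lzma(open("lib/pypy.vm.js.mem", "rb").read())
```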

Results:
(EDIT: remove results, because of https://gist.github.com/jedie/d650de636711aa786235 )

I don't know if the compressed data is really compatible with https://github.com/nmrugg/LZMA-JS

An interesting question is how well the server's gzip does its job. Maybe by default it's configured for speed rather than for a good compression ratio?!?


rfk commented Jun 11, 2015

Thanks! That's definitely an improvement over gzip even at the highest compression level. I guess the next question is: how quickly does it decompress using LZMA-JS?

An interesting question is how well the server's gzip does its job.
Maybe by default it's configured for speed rather than for a good compression ratio?!?

Likely, I think it defaults to something like gzip -6. Trying to get better control over this is one of the reasons I was toying around with a CDN, where you can upload pre-zipped versions of the files.
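The gap between a `gzip -6`-style default and the maximum level is easy to check with Python's stdlib zlib, whose levels roughly correspond to gzip's (a hedged sketch; the function name is made up):

```python
import zlib

def compare_levels(data, levels=(1, 6, 9)):
    """Return the zlib-compressed size of `data` at each level:
    1 is fastest, 6 is the common server default, 9 the maximum."""
    return {level: len(zlib.compress(data, level)) for level in levels}
```

Run on the real pypy.vm.js.mem, this would show how much the server's default setting is leaving on the table.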


jedie commented Jun 12, 2015

I have made the same test for gzip... (using zlib.compress) and added an "efficiency" value:

efficiency is the ratio of compression ratio to duration:

(EDIT big output removed -> https://gist.github.com/jedie/d650de636711aa786235 )

sources: https://gist.github.com/jedie/95225b0ecb89f23688f3

Some zlib file sizes:
(EDIT big output removed -> https://gist.github.com/jedie/d650de636711aa786235 )

I tried to compare... I used level=9 for zlib and picked the lzma entry that took a similar time:
lib/pypy.vm.js

type file size time
zlib level=9 compressed.........: 1.89 MBytes compression time...: 0.95 sec.
lzma preset=3 compressed.........: 1.29 MBytes compression time...: 0.82 sec.

lib/pypy.vm.js.mem

type file size time
zlib level=9 compressed.........: 1.94 MBytes compression time...: 1.67 sec.
lzma preset=4 compressed.........: 1.31 MBytes compression time...: 1.48 sec.

So: a similar time results in better compression with lzma... No surprise ;)

(Note: I used results from different runs, so the values vary a little bit here and there ;) )

jedie added a commit that referenced this issue Jun 12, 2015
@jedie jedie changed the title http://pypyjs.org/pypyjs-release/lib/pypy.vm.js.mem will not be cached?!? / compress with LZMA? compress with LZMA! (was: pypy.vm.js.mem will not be cached?!?) Jun 12, 2015

jedie commented Jun 12, 2015

With 3b0e3e6 i created a new branch here: https://github.com/pypyjs/pypyjs.github.io/tree/lzma_test

There are three files:

  • compress_lzma.py - creates the .lzma files from 'lib/pypy.vm.js' and 'lib/pypy.vm.js.mem'
  • test_lzma_workers.html - requests/decompresses the .lzma files without Web Workers
  • test_lzma_WebWorkers.html - does the same with Web Workers

In compress_lzma.py I used preset=3... This results in 2.88 MB (both files together; uncompressed it's 19.2 MB ;) )... Using a higher preset costs much more compression time but doesn't shrink the files much further...

How to try it out:

I'm not a JavaScript junkie, so maybe I made a few bugs. The Web Workers variant doesn't really work; it seems to take a very long time...
It also seems that the LZMA-JS "on_progress" callback is buggy.

The non-Web-Workers variant works well here with Firefox 31.7.0 and Chrome 43.0...
The output is:

request file 'pypy.vm.js_preset3.lzma'
request file 'pypy.vm.js.mem_preset3.lzma'

Request done, result type: [object ArrayBuffer]
uint8 length: 1492079
decompress 'pypy.vm.js_preset3.lzma'...
Decompressing: -100%

Request done, result type: [object ArrayBuffer]
uint8 length: 1538951
decompress 'pypy.vm.js.mem_preset3.lzma'...
Decompressing: -100%

File 'pypy.vm.js_preset3.lzma' decompressed in 4sec.
First decompressed output:
var Module;if(!Module)Module=(typeof Module!=="undefined"?Module:null)||{};var m...

File 'pypy.vm.js.mem_preset3.lzma' decompressed in 5sec.
First decompressed output:
5,9,3,0,80,-9,21,0,0,0,0,0,0,0,0,0,25,0,3,0,0,0,0,0,9,0,3,0,-64,-98,94,-52,7,0,0,0,100,101,98,117,103,58,32,0,0,0,0,0,9,0,3,0,72,42,31,78,15,0,0,0,79,112,101,114,97,116,105,111,110,69,114,114,111,114,58,0,0,0,0,0...

4 and 5 sec sounds good, doesn't it?
I don't know if this measurement is really right, because of the async handling.
I think the 5 sec is the complete time for both files!

Chrome is a little bit faster: 2 and 2 sec. (Maybe the non-ESR version of Firefox is also faster?!?)

Again, I'm not a JavaScript junkie, so I don't really know how to implement this in the pypy.js loading mechanism.


rfk commented Jun 12, 2015

Great, thanks for diving into this! I'll try to make some time this weekend to take your work here and plug it into the pypy.js loading mechanism to see how it performs.

4 and 5 sec sounds good, doesn't it?

That's certainly good compared to a cold load of the uncompressed file!

The downside may be that we have to pay that overhead every time, even if the compressed file is loaded from the browser's cache. I was (somewhat naively) hoping that it would be around 1 second, and I guess Chrome's speed is not too far from that.


jedie commented Jun 12, 2015

I ran another test on a different computer... It's an old Intel Q9550 (quad core with 2.83 GHz) under Linux Mint...
The decompression is much slower: both files take around 13 sec. :(

IMHO it's a question of the available bandwidth...

btw. found this: https://wiki.mozilla.org/LZMA2_Compression


rfk commented Jun 14, 2015

Another option may be to use a compression algorithm that's faster but has poorer compression, such as LZ4 or snappy. Here's a quick comparison using lz4:

6.6M    pypy.vm.js.mem
1.9M    pypy.vm.js.mem.gz
2.6M    pypy.vm.js.mem.lz4
2.0M    pypy.vm.js.mem.lz4.gz

Doing a pure-javascript lz4 decompression of the memory would still save a lot of download time compared to loading the raw file, and IIUC would be substantially quicker to decompress than LZMA. And it could still be combined with gzip content-encoding for folks who are able to tweak their server setup appropriately.


jedie commented Jun 14, 2015

Another candidate: http://liblzg.bitsnbites.eu/

I'm trying to make a normal Python package for lzg here: https://github.com/jedie/python-lzg
But I have no experience with ctypes and binary Python packages...


jedie commented Jun 14, 2015

Interesting benchmarks: https://quixdb.github.io/squash-benchmark/

--But no liblzg ;(--

EDIT: liblzg is there. Just named 'lzg'...

I chose the "Tarred source code of Samba 2-2.3" dataset.
IMHO the interesting machines are:

  • most power: "peltast" (for absolute ratio/speed)
  • lowest power/RAM: "beagleboard-xm" and "raspberry-pi-2"

Yes, lz4 seems to be really fast at decompression.
Interesting codecs with a fair "ratio to decompression speed" trade-off:

  • doboz
  • lzham
  • zlib:deflate
  • lzo

Here is a cleaned chart with the "samba sources" on the RPi2:

(screenshot)

And here on the beagleboard:

(screenshot)

But I didn't find a JavaScript implementation of "doboz" or "lzham"...

zlib:deflate seems to be here: https://github.com/dankogai/js-deflate

For lzo I only found "miniLZO" in JavaScript here: https://github.com/abraidwood/minilzo-js


jedie commented Jun 14, 2015

OK, I have done a first test: zlib:deflate is lightning fast compared to lzg!!!
I used the deflate source from: https://github.com/imaya/zlib.js

On the same machine where lzg needs 12 sec:

request file 'pypy.vm.js_level9.deflate'
request file 'pypy.vm.js.mem_level9.deflate'

Request done, result type: [object ArrayBuffer]
convert to Uint8Array
OK, uint8 length: 1969526 Bytes
decompress...
done in 204ms to: 13293637 Bytes.
First decompressed output:
var Module;if(!Module)Module=(typeof Module!=="undefined"?Module:null)||{};var m...

Request done, result type: [object ArrayBuffer]
convert to Uint8Array
OK, uint8 length: 2027862 Bytes
decompress...
done in 157ms to: 6911400 Bytes.

204 ms and 157 ms is IMHO faster than we need, isn't it?

EDIT: compression:

_______________________________________________________________________________
Compress 'pypyjs-release/lib/pypy.vm.js' to './pypy.vm.js_level9.deflate'...
Compress with level=9 - (13293637 Bytes uncompressed)


compression time...:   1.70 sec.
uncompressed.......:  12.68 MBytes
compressed.........:   1.88 MBytes
compression ratio..:  14.82 %
_______________________________________________________________________________
Compress 'pypyjs-release/lib/pypy.vm.js.mem' to './pypy.vm.js.mem_level9.deflate'...
Compress with level=9 - (6911400 Bytes uncompressed)


compression time...:   2.89 sec.
uncompressed.......:   6.59 MBytes
compressed.........:   1.93 MBytes
compression ratio..:  29.34 %

Compared to your values:

6.6M    pypy.vm.js.mem
1.9M    pypy.vm.js.mem.gz
1.93M  pypy.vm.js.mem_level9.deflate
2.6M    pypy.vm.js.mem.lz4
2.0M    pypy.vm.js.mem.lz4.gz

EDIT: Test code is here: a2c3253


rfk commented Jun 14, 2015

Wow, that's pretty impressive! I'd be willing to bet we can get the time down even more by special-casing some of the logic for our needs as well.

https://github.com/abraidwood/minilzo-js
On the same machine, where lzg needs 12Sec:

From a quick look at the source, this implementation works with arrays of integers rather than with native UInt8Array objects, which probably explains why it's so much slower.

@rfk rfk changed the title compress with LZMA! (was: pypy.vm.js.mem will not be cached?!?) Add transparent compression of the .mem file, to make deployment easier Jun 14, 2015

jedie commented Jun 15, 2015

Can you try to compile lzham via emscripten to JavaScript? See: richgel999/lzham_codec#9


rfk commented Jun 16, 2015

I'll try, but I haven't got a lot of experience using it with CMake-driven C++ projects. Will let you know how I go...


jedie commented Jun 17, 2015

my try is here: richgel999/lzham_codec#9 (comment)


jedie commented Jun 17, 2015


jedie commented Jun 18, 2015

With e62d9c8 I hacked up a test of https://github.com/google/zopfli

zopfli should compress more than zlib, but stays compatible with it...

It seems it's not worth it:

Compress 'pypy.vm.js' with zlib level=9 to './pypy.vm.js_level9.deflate'
uncompressed.......:  12.00 MBytes
compression time...:   0.96 sec.
compressed.........:   1.00 MBytes
compression ratio..:  14.82 %

Compress 'pypy.vm.js.mem' with zlib level=9 to './pypy.vm.js.mem_level9.deflate'
uncompressed.......:   6.00 MBytes
compression time...:   1.67 sec.
compressed.........:   1.00 MBytes
compression ratio..:  29.34 %

===============================================================================

Compress 'pypy.vm.js' with zopfli to './pypy.vm.js_zopfli'
uncompressed.......:  12.00 MBytes
compression time...:  91.86 sec.
compressed.........:   1.00 MBytes
compression ratio..:  14.29 %

Compress 'pypy.vm.js.mem' with zopfli to './pypy.vm.js.mem_zopfli'
uncompressed.......:   6.00 MBytes
compression time...:  39.36 sec.
compressed.........:   1.00 MBytes
compression ratio..:  28.82 %

For this I have used https://github.com/wnyc/py-zopfli
But its bundled zopfli sources are not up to date, so I also tested with a fresh https://github.com/google/zopfli checkout: the values did not change significantly.

This was referenced Jun 18, 2015

jedie commented Jun 18, 2015

With b4a3cea I refactored my test code and ran a complete test with pypy.vm.js + pypy.vm.js.mem and all levels of zlib, bzip2 and lzma

Results here: https://gist.github.com/jedie/d650de636711aa786235

I will clean up this issue and remove all test results that already exist in the gist ;)


jedie commented Jun 18, 2015

So we need to find an extension that will (1) ensure the content-type is set to something binary like application/octet-stream or similar, and (2) will trigger sensible caching and gzipping behaviour from github pages. Any suggestion?

I have looked at the WebAssembly examples:

In both cases the caching also seems not to work...


rfk commented Jun 18, 2015

I expect WebAssembly to be really useful for reducing the size of the code download (js vs bytecode), but it probably won't have much effect on the size of the memory data download - if it's the same compiled program then it would require the same memory image layout. So anything we do here will still be useful in a WebAssembly world.


jedie commented Jun 19, 2015

btw. I created a thread on "encode.ru" (seems to be the biggest forum about data compression):

Currently no suggestions :( Maybe my English is too bad...

EDIT: Just forgot: zlib vs. bzip2:

(screenshot)

bzip2 also makes no sense: it's slower than LZMA with a worse compression ratio...

This is also consistent with my compression measurements:

pypy.vm.js:

    zlib level=5    0.40sec     2.02MB  15.93%
    bzip2 level=1   1.54sec     1.66MB  13.09%
    lzma preset=3   1.67sec     1.42MB  11.22%

pypy.vm.js.mem:

    zlib level=5    0.32sec     1.98MB  29.98%
    bzip2 level=1   1.12sec     1.94MB  29.42%
    lzma preset=3   0.96sec     1.47MB  22.27%

(Only the best compression ratio/duration entries)
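The rows above come from the gist's benchmark script; a self-contained sketch of the same three-codec comparison using only the stdlib could look like this (the function name is illustrative; the settings mirror the quoted rows):

```python
import bz2
import lzma
import time
import zlib

def compare_codecs(data):
    """Compress `data` once with each stdlib codec and return
    {name: (compressed size, seconds)}."""
    codecs = [
        ("zlib level=5", lambda d: zlib.compress(d, 5)),
        ("bzip2 level=1", lambda d: bz2.compress(d, 1)),
        ("lzma preset=3", lambda d: lzma.compress(d, preset=3)),
    ]
    results = {}
    for name, compress in codecs:
        start = time.perf_counter()
        size = len(compress(data))
        results[name] = (size, time.perf_counter() - start)
    return results
```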

jedie added a commit to jedie/pypyjs.github.io that referenced this issue Jun 21, 2015

jedie commented Jun 21, 2015

With jedie@4af3e54 I started hacking on requesting/decompressing the deflate files...
Currently it doesn't work... my limited JavaScript knowledge...


jedie commented Jun 26, 2015

So we need to find an extension that will (1) ensure the content-type is set to something binary like application/octet-stream or similar, and (2) will trigger sensible caching and gzipping behaviour from github pages. Any suggestion?

Back to this question.

The tests from #7 show that .tar.gz and .zip are usable file endings that "activate" the caching ;)


jedie commented Jun 29, 2015

With jedie@e0b2cb1 I have made a rudimentary working version.

I have only worked on the editor page with a local copy of pypy.js here: https://github.com/jedie/pypyjs.github.io/blob/gh-pages/pypy_compression_test.js

Try it out here: https://jedie.github.io/pypyjs.github.io/editor.html

It fetches "./download/pypyjs.zip" which contains these two files:

  • "pypy.vm.js"
  • "pypy.vm.js.mem"

I had a little problem using the mem file. But I found emscripten-core/emscripten#3187 and used memoryInitializerRequest, see: https://kripken.github.io/emscripten-site/docs/tools_reference/emcc.html?highlight=memoryinitializerrequest (the --memory-init-file note box).
But it seems I did not do it in a best-practice way ;)

The init process blocks the browser. Maybe while processing the mem file?!?

And there is a bug somewhere when running the editor VM again. Maybe around the memoryInitializerRequest stuff...


rfk commented Jun 30, 2015

@jedie thanks for continuing to push on this. To be honest, I'm dubious about zipping the .js file because webservers are already good at this, and because we'll have to pay the cost of unzipping it on every startup even if it's loaded from the cache. I'm willing to be convinced by numbers though :-)

I'm also hacking on a hand-written asmjs unzip implementation for loading the memory file, similar to #148 but using zip. Will be interesting to see how it compares performance-wise to the jszip stuff.


jedie commented Jun 30, 2015

Yes, .zip was not my first choice. But it seems that this is currently the best choice :(
Maybe lzham would be a better choice, if there were a JavaScript implementation...

I just compared my startup with the "normal" startup:

With empty cache, it's: ~9sec vs. ~11sec on my ~14MBit downlink.
With filled cache: ~6.5sec vs. ~11sec

I would like to see your "hand-written asmjs unzip implementation" compared to my tests.


jedie commented Jun 30, 2015

Hm! I just found https://github.com/ukyo/zlib-asm and the benchmark http://imaya.github.io/demo/zlib/index.html that compares:

Then I looked more at your work here: pypyjs/pypyjs@a1cb945#commitcomment-11928467

You use a sub-set of zlib:deflate... It would be interesting to compare this with zlib-asm, a full deflate implementation in asm.js...

EDIT: With jedie@263862f I applied a bugfix: now the "run" works, too. I just cache the .mem file content in a global variable -> jedie@263862f#diff-cff69e84371ac601d670eba337a2b1d8R119
Is there a better way to cache this?

EDIT2: With jedie@151e18e I cache 'index.json' in a similar way.


rfk commented Jun 30, 2015

With empty cache, it's: ~9sec vs. ~11sec on my ~14MBit downlink.
With filled cache: ~6.5sec vs. ~11sec

I assume the smaller number is when things are zipped - so significantly faster in both cases?

Hopefully I'll get a chance to run some tests like this myself at some stage this week.


jedie commented Jul 1, 2015

TODO: also include /lib/modules/index.json into the .zip file!

I've been thinking about "Why not just put all files into a .json" (from #7 (comment) )

Actually, there are different questions to be answered:

  • The container format (.tar, .zip, .json etc.) ?
  • The compression format (lzham, deflate etc.) ?
  • own compression or server/browser gzip ?
  • If server/browser gzip: which file extension works with default browser settings ?
  • own container or native .json parsing in browser ?

We can merge requests by putting these "init" files into one container (.tar, .zip, .json etc.):

  • /lib/modules/index.json
  • /lib/pypy.vm.js
  • /lib/pypy.vm.js.mem
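A sketch of how such a bundle could be built with Python's stdlib zipfile module (the function name is made up; the archive names are the files listed above):

```python
import io
import zipfile

def build_init_bundle(files):
    """Pack the startup files into a single deflate-compressed .zip,
    so the browser needs only one request. `files` maps archive
    names to byte contents; returns the .zip as bytes."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, content in files.items():
            zf.writestr(name, content)
    return buf.getvalue()

# e.g.: build_init_bundle({
#     "lib/modules/index.json": index_json_bytes,
#     "lib/pypy.vm.js": vm_js_bytes,
#     "lib/pypy.vm.js.mem": mem_bytes,
# })
```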

Using a .json container:

  • nice file extension: .json will enable server/browser native gzip by default, won't it?
  • the browser can parse JSON natively (should be the fastest solution?)

.json disadvantages:

  • JSON blows up the total size (its own syntax plus additional escape characters)
  • binary data in JSON blows up a lot with base64
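The base64 bloat is easy to quantify: the encoding alone costs roughly a third on top of the raw size, before JSON quoting. A small sketch (the function name and key name are illustrative):

```python
import base64
import json

def json_overhead(blob):
    """Return (raw size, size of a JSON document embedding the blob
    as base64); base64 output is 4 bytes for every 3 input bytes."""
    document = json.dumps(
        {"pypy.vm.js.mem": base64.b64encode(blob).decode("ascii")})
    return len(blob), len(document)
```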

To prevent the .json size bloat, there are some "binary serialization formats" out there, e.g.:

  • msgpack: http://msgpack.org/ (JavaScript: https://github.com/msgpack/msgpack-javascript, Python: https://github.com/msgpack/msgpack-python)
  • ubjson: http://ubjson.org/ (JavaScript: https://github.com/artcompiler/L16, Python: https://github.com/brainwater/simpleubjson)

But then we have the same problems as with .zip or .tar:

  • we lose the native .json parsing support
  • we need to find a file ending that enables the gzip compression

Will this be faster than using an uncompressed .tar?

The current .zip solution can also be enhanced: currently I use https://github.com/Stuk/jszip, which uses https://github.com/nodeca/pako for decompression.

Using .tar and pako was not a good combination, see the results here: #7 (comment)

Maybe using .json (or msgpack/ubjson) with pako is faster than the current solution?!?

This should also improve the compression result, because it's something like 7-Zip's "solid compression" (as I mentioned for the .tar.gz solution).
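The "solid compression" effect can be demonstrated with plain zlib: redundancy shared *between* files only helps when they are compressed in one pass. A sketch with synthetic data (the function name is made up):

```python
import zlib

def solid_vs_separate(files):
    """Return (total size compressing each file separately,
    size of one pass over the concatenation)."""
    separate = sum(len(zlib.compress(f, 9)) for f in files)
    solid = len(zlib.compress(b"".join(files), 9))
    return separate, solid
```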

Conclusion: I will try to create a test for "native .json usage" today...

jedie added a commit to jedie/pypyjs.github.io that referenced this issue Jul 1, 2015

jedie commented Jul 1, 2015

With jedie@e494644 I implemented a plain .json test.

I only changed the way Python packages are loaded, not the init part; that still fetches pypyjs.zip...

Just hit the "run" button on the page to compare:

Results with empty caches:

  • .zip solution: Run in 1.7sec.
  • .json solution: Run in 5.5sec.

With filled caches (just hit run button several times):

  • .zip solution: Run in ~250ms
  • .json solution: Run in ~2.5sec.

Hm. Is parsing JSON really so slow in Firefox 38.0 under Linux?!? Maybe I did something wrong?!?

btw. the github-server compression is not the best, but ok:

  • the original platform.json file is 1.3 MB
  • the self-made platform.zip is 328.7 KB (from the file manager; Firebug displays 321.0 KB)
  • sent from github: 351.5 KB (displayed in Firebug)

A very strange idea:

  • just use .zip as a container without compression
  • save it as .zip.js, so that the server serves the file gzipped

:-/


jedie commented Jul 2, 2015

Now I can compare Chrome vs. Firefox...

Results with empty caches:

  • .zip solution: Run in 1.7sec.
  • .json solution: Run in 5.5sec.

With filled caches (just hit run button several times):

  • .zip solution: Run in ~250ms
  • .json solution: Run in ~2.5sec.

Wow. Chrome is faster:

  • .json solution with empty cache: Run in ~1.7sec.
  • .json solution with filled cache: Run in ~1.1sec.

Is the JSON parser in Chrome so much faster than the one in Firefox?!?

The .zip solution currently doesn't work in Chrome :(
EDIT: Bugfix with jedie@d705210

  • .zip solution with empty cache: Run in ~1.7sec.
  • .zip solution with filled cache: Run in ~1.2sec.

Seems to run a tick slower, but it's almost equal.


jedie commented Jul 2, 2015

OK, I tried to figure out where the bottleneck is. The "runtime analyzer" in Firefox and Chrome didn't help me here.

So I added some "debug(duration)" calls in JS with: jedie/pypyjs.github.io@d705210...548a1c1

Here are my results:

Firefox ".zip" solution: https://jedie.github.io/pypyjs.github.io/editor.html

*** hit "run" button:
./download/platform.zip loaded in 445ms
.zip parsed in 15ms
created all files in 182ms
-->> Run in 5.9sec.

*** hit "run" button, again:
./download/platform.zip loaded in 12ms
.zip parsed in 7ms
created all files in 171ms
-->> Run in 5.1sec.

Chrome ".zip" solution: https://jedie.github.io/pypyjs.github.io/editor.html

*** hit "run" button:
./download/platform.zip loaded in 443ms
.zip parsed in 19ms
created all files in 64ms
-->> Run in 1.4sec.

*** hit "run" button, again:
./download/platform.zip loaded in 259ms
.zip parsed in 36ms
created all files in 48ms
-->> Run in 1.1sec.

Firefox ".json" solution: https://jedie.github.io/pypyjs.github.io/json_test.html

*** hit "run" button:
./download/platform.json loaded in 506ms
JSON.parse() in 10ms
created all files in 78ms
-->> Run in 5.9sec.

*** hit "run" button, again:
./download/platform.json loaded in 19ms
JSON.parse() in 5ms
created all files in 74ms
-->> Run in 5.0sec.

Chrome ".json" solution: https://jedie.github.io/pypyjs.github.io/json_test.html

*** hit "run" button:
./download/platform.json loaded in 368ms
JSON.parse() in 6ms
created all files in 13ms
-->> Run in 1.2sec.

*** hit "run" button, again:
/download/platform.json loaded in 277ms
JSON.parse() in 8ms
created all files in 23ms
-->> Run in 1.1sec.

IMHO these duration outputs don't reveal the bottleneck :(
Any idea?

EDIT: Just a "numbers game": I added up all durations (but not the "Run in" times):
Firefox+Chrome .zip solution: 1701 ms
Firefox+Chrome .json solution: 1387 ms


rfk commented Jul 3, 2015

@jedie this platform.zip/platform.json test is just for loading the files necessary to import platform, is that correct?

IMHO this duration outputs didn't cover the bottleneck :(

I'm not seeing the same results - when I run your tests, AFAICT with a warm cache it runs in a few hundred milliseconds and almost all of that time is spent in the file loading routines.


jedie commented Jul 5, 2015

I have only generated and added platform to the git repo, so you can only import this module...
