Using a predefined dictionary #69

omrio opened this Issue Jan 19, 2016 · 8 comments



4 participants

omrio commented Jan 19, 2016


Kudos for the package :-)

I'd like to use it in my code to compress many small JSON documents that share a lot of strings (field names), but I'd like to compress each one separately (to save memory and to access any one of them on demand without decompressing the others).
I think a good solution would be a single predefined compression dictionary optimized for these documents, so that each one compresses well on its own.

Now, I saw that the readme file mentions that deflateSetDictionary, inflateSetDictionary and inflateGetDictionary are not supported by pako.

From what you know, how difficult should it be to add them to pako? Is there a special reason why they weren't implemented?

Thanks in advance,

puzrin commented Jan 19, 2016
  1. I think the implementation should not be very difficult, because all the required components should already be available. It would be better if you dig through the sources yourself for such estimates; the JS code is close to the original.
  2. We did not implement it due to time constraints, and because we had no idea where to apply it or how to test it.
puzrin commented Jan 27, 2016

Closing as answered.

@puzrin puzrin closed this Jan 27, 2016

If you're interested in compressing small bits of JSON, you can also read this and follow the links from within the post to Ayende's other articles.

@diasdavid diasdavid referenced this issue in libp2p/js-libp2p-spdy Mar 24, 2016

Testing libp2p-spdy on the browser #6


@puzrin would you consider reopening this issue? I'm working on getting these dictionaries working so that I can run the spdy stream multiplexer (spdy-transport) in the browser with libp2p-spdy.

I started by implementing the dictionary tests available in Node.js core against the pako API: diasdavid@a413b13

Since I'm not familiar with the code, getting it ready is taking more time than I initially expected; any word of advice or guidance would be extremely appreciated.

puzrin commented Mar 25, 2016

@diasdavid TBH I'm not sure it's a good idea to put this feature into the top-level API (it's very specific). Why not use those files directly?

puzrin commented Mar 25, 2016

lib/zlib has some missing methods, because we had no idea how to use or test them, but a PR will be accepted.


@puzrin it seems that in order to 'expose' them in the top-level API, they would first have to be implemented in lib/zlib. My goal is to make browserify-zlib able to fully shim the Node.js zlib API.

lib/zlib has some missing methods, because we had no idea how to use or test them, but a PR will be accepted.

For the ones relating to the dictionary, wouldn't reusing the same Node.js zlib tests give reasonable guarantees?

puzrin commented Mar 26, 2016

I don't know. Not familiar with it.

Anyway, I think it's worth starting by porting the rest to lib/zlib and adding tests. That can be merged without problem. Then we can consider a top-level API change, if it's really needed and useful.
