Evaluate options for a language-independent checkpoint/serialization format for the CLA #333
Thrift is a PITA both conceptually and in implementation. --Please excuse brevity, sent from phone.--
@iandanforth Yes, I have heard (and experienced) similar opinions. Frank mentioned the same in JIRA, I believe. Something like MessagePack is easy to use but is harder to maintain and doesn't work as well across languages (as far as I can tell). I am leaning towards protobufs or similar. I quite like Cap'n Proto and there is active development, but previously it was still a bit raw. One potential advantage of it over PB is its planned support for memory mapping, which could make deserialize-run-serialize operations very fast (this is the current Grok use case).
Here is my branch with an initial Python spatial pooler test: The files are in
My initial results weren't very good. I suspect there is type conversion going on. It would probably be better to do the first tests in C++, where it is more obvious what is happening. I wasn't doing thorough profiling, just saw that the time to create the model and the time to feed a record in were longer. From a theoretical standpoint, I am fairly confident that we can make serialization/deserialization and runtime both faster, though.
👍
There are currently four different options we want to measure times for: C++ and Python protocol buffers, and C++ and Python Cap'n Proto buffers. There are two ways to use these. In the simple scenario, we leave the code pretty much the same and simply create and populate the buffers when we need to serialize. The other is to actually use the buffer in memory during execution. This doesn't work for some fields, like sparse matrices, but it works for almost everything else. In the former case, we want to measure the times for the following operations: creating and populating the buffer, serializing it, deserializing it, and loading the values back into the class fields.
In the case that we use the buffer in memory during execution, we would want to measure the time it takes to run records through, in addition to the times for copying/serializing/deserializing. Finally, when doing these timing tests it is important to run some records through the pre-serialization and post-deserialization objects and compare the results to ensure that everything is implemented correctly (it wouldn't be a fair timing test if some pieces were left out!).
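The measurement loop described above could be sketched roughly as follows, using `pickle` purely as a stand-in for the protobuf/Cap'n Proto serializers being compared; the state fields here are hypothetical placeholders for real SP state:

```python
import pickle
import time

def time_op(fn):
    """Run fn once and return (result, elapsed seconds)."""
    start = time.perf_counter()
    result = fn()
    return result, time.perf_counter() - start

# Hypothetical SP state as plain fields; real models hold much more.
sp_state = {
    "num_columns": 2048,
    "permanences": [[0.5] * 16 for _ in range(2048)],
}

# 1. Create/populate the buffer and serialize (pickle as placeholder).
blob, t_serialize = time_op(lambda: pickle.dumps(sp_state))

# 2. Deserialize and load the values back.
restored, t_deserialize = time_op(lambda: pickle.loads(blob))

# 3. Correctness check: the pre-serialization and post-deserialization
#    state must match, or the timing comparison isn't fair.
assert restored == sp_state
```

The same harness shape applies to each of the four options; only the dumps/loads calls change.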
Yeah, I've seen the code, and it's taking the approach of "actually use the buffer in memory during execution".
It seems that Cap’n Proto does the encoding when feeding the data, and decoding on retrieval:
So, I guess it's definitely not an option to use Cap'n Proto as the internal structure of the SP and such; it will slow things down.
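The concern above (encoding on set, decoding on get) can be illustrated in plain Python. This is a toy model, not the actual Cap'n Proto implementation: a field backed by a wire-format buffer pays a pack/unpack on every access, while a native attribute does not.

```python
import struct

class BufferBacked:
    """Toy model of a buffer-backed field: every set encodes to wire
    format and every get decodes, like a serialization-buffer accessor."""
    def __init__(self):
        self._buf = bytearray(4)

    @property
    def value(self):
        return struct.unpack_from("<f", self._buf)[0]

    @value.setter
    def value(self, v):
        struct.pack_into("<f", self._buf, 0, v)

class Native:
    """Plain attribute: no conversion on access."""
    def __init__(self):
        self.value = 0.0

b, n = BufferBacked(), Native()
b.value = 0.5
n.value = 0.5
# In a hot loop (e.g. SP permanence updates) the pack/unpack on every
# access is overhead the native attribute never pays, which is why
# using the serialization buffer as the live data structure can hurt.
assert abs(b.value - n.value) < 1e-6
```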
Cap'n Proto is currently C++11 only and I haven't been able to get it to link with the current nupic.core (or even Marek's C++11 branch). Performance with protocol buffers looks good: both the stages of creating the protocol buffer and serializing, as well as deserializing and loading variables back into the class fields, were about 2 times faster than the current implementation. Right now my implementation doesn't use the protobuf object in memory throughout SP execution, but allocates and populates it when the save function is called.
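The "allocate and populate on save" approach implies a message schema along these lines. This is a hypothetical sketch for illustration, not the actual definition from the branch; field names and types are assumptions:

```protobuf
// Hypothetical schema for SP state; real fields differ.
syntax = "proto2";

message SpatialPoolerProto {
  optional uint32 numInputs = 1;
  optional uint32 numColumns = 2;
  optional double synPermConnected = 3;
  // Flattened permanence matrix, row-major (numColumns x numInputs).
  repeated float permanences = 4 [packed = true];
}
```

With a schema like this, the save path builds and fills a `SpatialPoolerProto`, serializes it, and discards it, so normal SP execution never touches the buffer.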
I'm wondering... what kind of API do we need for the language bindings? It also depends on the medium we communicate over: a dynamic linking mechanism or mere network packets. We can do the binding at multiple levels. Does efficiency matter? Speed can be optimized for throughput, latency, operations per second, etc.
When I get back from vacation, I want to have a discussion about updating to C++11. That might change the implementation of this issue significantly.
There's #130 taking the approach of using Google protocol buffers. Just wondering if anyone has noticed and evaluated FlatBuffers, which was also developed by Google as a successor to protocol buffers. At first glance, it also provides "access to serialized data without parsing/unpacking" and has miscellaneous efficiency improvements, just like Cap'n Proto.
@scottpurdy Would you call this ticket complete?
rhyolight commented a day ago
If it's complete, what's the conclusion?
We don't have a decision on this yet. I think it makes sense to keep tracking it here. I will create a follow-up issue to track the implementation, to be done after we finalize the decision.
Now that we have a C++11 nupic.core, I am going to attempt Cap'n Proto again.
We have more motivation for this issue from #1231, which is a somewhat serious bug in NuPIC.
@scottpurdy Once C++11 is finished across |
@rhyolight - that is my current plan, yes |
Shall we close this yet? -- Matt Taylor (Fri, Oct 3, 2014)
Closing, assuming we're going with Cap'n Proto in #1336. |
The current CLA model checkpoint uses the pickle module. As we move towards multiple language support and more external model sharing, we should define a language-independent format for serializing the CLA.
The major objectives for this would be cross-language implementation (i.e. we don't have to create serialize functions separately for each language), speed, checkpoint size, and ease of development and versioning.
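For context, the current pickle approach is what ties checkpoints to Python. A minimal sketch of the limitation (the state dict here is a hypothetical placeholder for real CLA model state):

```python
import pickle

# Hypothetical slice of CLA model state; real checkpoints hold far more.
state = {"iteration": 42, "boost_factors": [1.0, 1.2, 0.9]}

# Checkpointing with pickle is trivial from Python...
checkpoint = pickle.dumps(state)

# ...but the byte stream is Python's own pickle protocol: a C++ or
# Java implementation of the CLA cannot read it, which is what
# motivates a schema-based, language-independent format such as
# protocol buffers or Cap'n Proto.
restored = pickle.loads(checkpoint)
assert restored == state
```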