With a large data.json file, we'll never be able to truncate the disk store on bundle activation: the value we attempt to write is larger than what a single txn can hold, and the commit-and-retry logic fails as well: the commit is empty (no problem), but the retry will just yield the same error.
What we could do here is enhance the bundle iterator to split values according to their size: if a value is above some threshold X, we add iteratees for its substructures. For objects,
{ "a": ..., "b": ... , ... }
we'll parse that JSON blob, and add iteratees for each value, with path = old_path + "a", path = old_path + "b", etc.
Analogously for arrays.
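A minimal sketch of the splitting idea (the `(path, value)` tuple shape, the `MAX_VALUE_SIZE` threshold, and the function name are assumptions for illustration, not the actual bundle-iterator API):

```python
import json

# Assumed threshold; the real limit would be derived from the txn size cap.
MAX_VALUE_SIZE = 512 * 1024


def iter_bundle_values(path, blob, max_size=MAX_VALUE_SIZE):
    """Yield (path, serialized value) pairs, splitting any value whose
    serialized form exceeds max_size into per-key / per-index entries."""
    if len(blob) <= max_size:
        yield path, blob
        return

    value = json.loads(blob)
    if isinstance(value, dict):
        # { "a": ..., "b": ... } -> entries at path + "a", path + "b", ...
        items = value.items()
    elif isinstance(value, list):
        # Arrays are handled analogously, keyed by index.
        items = ((str(i), v) for i, v in enumerate(value))
    else:
        # A scalar too large for a single txn cannot be split further.
        yield path, blob
        return

    for key, sub in items:
        yield from iter_bundle_values(path + (key,), json.dumps(sub), max_size)
```

Each yielded entry would then fit into its own txn; the recursion keeps splitting nested objects and arrays until every piece is under the threshold (or is an unsplittable scalar).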