Rebuilding entities is time-consuming, particularly for large structures. We have had success with caching these outside of the store, but this could also be done inside it.
For in-memory use, we can include a new index mapping entity nodes (the :db/id) to the entity. Most changes will be made via modifications to an existing structure that was already added, so there is likely to be code sharing. The main exception to this will be data re-acquired through APIs. It's possible to look for diffs between structures, but this would take a lot of effort, and should only be considered if we see significant memory use.
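A minimal sketch of such an in-memory index, assuming entities are plain Clojure maps keyed by their :db/id node. The names here (`entity-index`, `cache-entity!`, `update-cached-entity!`) are illustrative, not existing API:

```clojure
;; In-memory entity index: :db/id -> entity map.
;; Illustrative sketch only; names are not existing API.
(def entity-index (atom {}))

(defn cache-entity!
  "Store a freshly built entity under its :db/id."
  [entity]
  (swap! entity-index assoc (:db/id entity) entity))

(defn update-cached-entity!
  "Apply a change to an already-cached entity. Persistent maps make
  this cheap, since the new version shares structure with the old one."
  [id f & args]
  (apply swap! entity-index update id f args))
```

For example, `(cache-entity! {:db/id 1 :name "a"})` followed by `(update-cached-entity! 1 assoc :name "b")` replaces the cached entity without rebuilding it.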
For on-disk usage, we can write entities to an append-only file, using a Clojure serialization such as fressian. To avoid a new index, the SPO index can accept an internal predicate that connects entity IDs to the latest serialization location, in the same way that data in the data pool is referenced. This new predicate will be filtered out of triple results.
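A sketch of the append step, assuming clojure.data.fressian for serialization. The returned offset is what a triple like `[entity-id :tg/entity-location offset]` would record through the internal predicate; `:tg/entity-location` is a hypothetical predicate name, and the function is illustrative:

```clojure
;; Append a fressian-serialized entity to a file and return the offset
;; where it was written. Sketch only; assumes clojure.data.fressian.
(require '[clojure.data.fressian :as fressian])
(import '(java.io RandomAccessFile File))

(defn append-entity!
  "Serialize entity with fressian, append it to file, and return the
  offset of the write, for storage under an internal location predicate."
  [^RandomAccessFile file entity]
  (let [^java.nio.ByteBuffer buf (fressian/write entity)
        bytes (byte-array (.remaining buf))
        offset (.length file)]
    (.get buf bytes)        ;; copy the encoded data out of the buffer
    (.seek file offset)     ;; position at end of file: append-only
    (.write file bytes)
    offset))
```

Because the file is append-only, re-serializing an updated entity simply writes a new record, and the location triple is replaced to point at the latest offset.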
All of this can be handled via existing APIs accessing the modified data structures.
fressian works fine, and returns a ByteBuffer with the encoded data. Looking at the code, there isn't a lot beyond what our encoding does (and it's no smaller), with the exception of collections. If we expand our codec to cover collections, then we can avoid this dependency.
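For reference, the fressian usage under discussion is just a write/read round trip, where write produces the ByteBuffer mentioned above:

```clojure
;; Round trip an entity map through clojure.data.fressian.
;; fressian/write returns a ByteBuffer; fressian/read accepts one.
(require '[clojure.data.fressian :as fressian])

(def encoded (fressian/write {:db/id 1 :name "test"}))
(def decoded (fressian/read encoded))
```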
Entity updates will need to be translated into calls to assoc/dissoc.
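That translation can be sketched as a small dispatch from triple-level changes to map operations, assuming entities are plain maps; `apply-change` and the `[op attribute value]` shape are illustrative:

```clojure
;; Translate an [op attribute value] change into assoc/dissoc on an
;; entity map. Illustrative sketch; not existing API.
(defn apply-change
  "Apply a single change to an entity map:
   :assert adds or replaces the attribute, :retract removes it."
  [entity [op attr value]]
  (case op
    :assert  (assoc entity attr value)
    :retract (dissoc entity attr)))
```

For example, `(apply-change {:db/id 1 :name "a"} [:assert :name "b"])` yields `{:db/id 1 :name "b"}`.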