For example, I have tens of thousands of collections with thousands of docs each.
How will this impact:
– index performance?
– search performance?
– memory usage?
How can I benchmark this?
@gut4 As far as the underlying design goes, thousands of collections is not a problem, as the in-memory overhead associated with each collection is minimal. All documents (across all collections) are stored in a single RocksDB store on disk, so disk performance is the same as storing all documents in a single collection. We also only do O(1) per-collection lookups during indexing and searching, so there is no impact there either.
As for benchmarking, you can create a few thousand collections and index them with the kind of data you would be using in your actual production application. That would give you a sense of how the indexing performs. To test the search performance, you can run a benchmark using a tool like siege.
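To make the indexing side of that benchmark concrete, here is a minimal timing harness. It is a generic sketch, not tied to any particular client library: the `index_doc` function below is a placeholder you would replace with the actual indexing call from your client, and the collection/document shapes are made up for illustration.

```python
import time
from statistics import mean


def index_doc(collection: str, doc: dict) -> None:
    """Placeholder: swap in a real indexing call from your client library."""
    pass


def benchmark(num_collections: int, docs_per_collection: int) -> dict:
    """Index docs_per_collection documents into each of num_collections
    collections, timing each collection's batch separately."""
    timings = []
    for c in range(num_collections):
        collection = f"collection_{c}"
        start = time.perf_counter()
        for d in range(docs_per_collection):
            index_doc(collection, {"id": str(d), "title": f"doc {d}"})
        timings.append(time.perf_counter() - start)

    total_docs = num_collections * docs_per_collection
    total_time = sum(timings)
    return {
        "total_docs": total_docs,
        "total_seconds": total_time,
        "docs_per_second": total_docs / total_time if total_time else float("inf"),
        "mean_seconds_per_collection": mean(timings),
    }


if __name__ == "__main__":
    # Scale these up toward your real workload (e.g. thousands of collections).
    print(benchmark(num_collections=100, docs_per_collection=50))
```

Running this against your real client with production-shaped documents, at increasing collection counts, should show whether indexing throughput stays flat as the number of collections grows. The search side is better measured with an HTTP load generator such as siege, as noted above.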